Joan Warnow-Blewett, Anthony J. Capitos, Joel Genuth, and Spencer R. Weart

With contributions by
Frederik Nebeker, Lynne Zucker, and Michael Darby


Part A: Space Science

Part B: Geophysics And Oceanography

AIP Working Group for Documenting Multi-Institutional Collaborations in Space Science and Geophysics


(By Joel Genuth)

This essay serves two overlapping purposes. First, it discusses those aspects of multi-institutional collaborations in space science that are most important to generating or locating documents of likely interest to historians of science and technology. In this sense, it provides the empirical perspective necessary for archival analysis and appraisal guidelines to be well grounded in the realities of recent research. Second, it offers observations on where the institutional framework of the government-funded, multi-institutional collaboration has seemed to affect (or leave undisturbed) the social relations and acquisition of expertise that are necessary for the pursuit of scientific research. In this sense it provides a preliminary perspective on social patterns and changes within a scientific community.

The foundation for this essay is 102 interviews with participants in six multi-institutional collaborations in space science. All the collaborations contributed to spacecraft that were launched between 1975 and 1985. (In the terminology of the field, "project" refers to the collaborative effort to launch, operate, and analyze data from spacecraft; we will henceforth use "project" in the space scientists' sense.) In our choice of projects to study, the American Institute of Physics staff and consultants consciously tried to cover a range of features: projects managed by different space flight centers, projects whose participating scientists came from a variety of institutions, international and nationally organized projects, astrophysical and planetary science projects, and smaller and larger projects. In our choice of interviewees, the AIP staff sought to cover all the types of people who might be vital to the documentation of scientific work, from administrators at funding agencies to graduate students at university departments. The strategy was to learn a little about a lot in the belief that broad exposure was essential to producing sound recommendations for archivists and policy makers. We also aimed to provide a context that other scholars can compare with their own case studies, which will likely remain the dominant mode of inquiry into "big science."

The structure of the essay owes much to the large overlaps and infrequent divergences in the two major purposes. The first three sections, Project Formation, Project Organization and Management, and Activities of Experiment Teams, roughly place comparisons and contrasts among the projects in the narrative categories of beginning, middle, and end. The last four sections, Funding, Internationalism, Careers, and Communication, put the comparisons and contrasts in non-narrative functional contexts in order to highlight the structural similarities and differences among the cases. There are overlaps in content between the two sets of sections; the fuller exposition is in the narrative sections, which contain some material not found elsewhere.

The preeminent finding of the historical analysis is that National Aeronautics and Space Administration (NASA) and European Space Agency (ESA) headquarters and flight centers impose a formal structure on space science projects. However, projects formed for a variety of reasons and in a variety of ways, and participating scientists and engineers have been able to modify the mandated structure to fit their circumstances. Space science projects, during the time period we covered, originated both inside and outside the flight centers, and scientists joined projects through both formal competitions and informal recruitments by "networking" organizers. These differences affected who within the projects collected information, the breadth of scientific interests participating in the projects (and through them, the disciplinary communities addressed by the projects), and the level of prior working relations that participants brought to the projects. On this last point, we concur with the consulting sociologist, Lynne Zucker, that building "trust" is important to the performance of these collaborations.

The origins of our selected projects go back as far as the mid-1960s. Only in the two earliest [IUE and Einstein][1] was there confusion and conflict over the institutional settings of expertise needed for successful space-based research. For the rest, problems of rocketry, spacecraft structure, thermal balance, power supplies, spacecraft operations, and telemetry had all come to be understood as "engineering" and under the control of engineers, even though engineers lacked professional interests in making and analyzing space-based observations. To participate in the design of a space project, scientists have had to be members of institutions that had engineers or could contract with engineers to address these topics. Furthermore, since the point of designing a project has been to bring about its fabrication, project-creating scientists have needed to be aligned with engineering institutions with which funding agencies have wanted to do business.

The NASA space flight centers and the European Space Research and Technology Centre (ESTEC), the lone ESA space flight center, have made it their business to take responsibility for the engineering essential to space projects, and the scientists at NASA centers have made it their business to agitate for projects they believed would serve the interests of their centers and a scientific community. (Scientists at ESTEC are less entrepreneurial but can provide important organizational support to external scientists campaigning for a project.) The flight centers' prominence as homes to space project expertise has enabled the agencies to place a modicum of bureaucratic formality on the formation of projects. In "Phase A," studies have been commissioned to ascertain the technical feasibility of a project that has attracted interest through informal study and thus seems credible to flight center and headquarters officials. Desirable results have led to "Phase B," in which spacecraft designs are refined and construction costs estimated, with the goal of receiving an authorization to build the spacecraft and their payload(s) of scientific instruments.

While flight centers have no doubt been the most significant institutional generators of space science projects during the period we covered, they did not monopolize the necessary resources for creating such projects[2]. In the United States, a few research laboratories, through Defense patronage, have combined spacecraft-construction capabilities with scientific expertise in the design of research instruments to be flown on spacecraft (e.g., Johns Hopkins Applied Physics Laboratory and American Science and Engineering in our case studies). In Europe research institutes have likewise acquired that combination of capabilities with the support of individual national governments (e.g., Rutherford Appleton Laboratory, Max Planck Institute for Extra-terrestrial Physics). Scientists in university departments, however, lacked the resources to play a role in the creation of our selected projects. (It is not clear whether this result is an artifact of our selection of case studies; one obvious counter-example is that Robert Smith found Lyman Spitzer of Princeton to have been the central proponent of the Hubble Space Telescope[3].) The only corporate scientists to instigate a project shifted their institutional base to a linked government-university setting (American Science & Engineering scientists on Einstein). Our anecdotal evidence suggests that, with idiosyncratic exceptions[4], the individual universities that tried to acquire and maintain spacecraft-design expertise eventually found the endeavor too expensive and therefore specialized in the design and use of research instruments. Conversely, aerospace firms have specialized in spacecraft construction, because their research groups developing scientific instrumentation could not be both profitable and competitive with universities charging lower overhead rates and employing graduate students for skilled labor.

Given the truism that space projects form at the intersection of scientific opportunities with engineering capabilities, our case studies reveal a variety of inspirations for forming projects and a variety of channels through which social connections have been made among scientists and between scientists and engineers. Two principal distinctions characterize our selected projects in their formative stage. Four of the six projects came to the attention of headquarters decision-makers through the flight centers [IUE, Voyager, ISEE, and Giotto] while the other two originated on the outside [Einstein and AMPTE]. Four of the six cases—including the two that formed outside the flight centers—took their scientific impetus from prospects for improved measurements of physical processes [Einstein, AMPTE, IUE, ISEE]. The other two relied on opportunities created by rare astronomical configurations [Voyager and Giotto]. When flight center staff members instigated measurement-improving projects, they had to demonstrate their projects' engineering credibility, whereas when offering a spacecraft to investigate a rare natural event, they had to demonstrate scientific credibility. In either case, instigators have needed to convince an advocate at agency headquarters of the project's viability within the politics of the agency's budget. When outside scientists instigated projects with little help from a flight center, their principal difficulty was to find a route into the politics of the agency's budget. The existence of the projects we have studied attests to the wisdom, flexibility, or luck of their instigators in dealing with the representatives of engineering, scientific, and institutional interests in the politics of funding.

The flight center instigators for projects pursuing better measurements have been scientists in charge of flight center laboratories or branches. To turn ideas into designs, scientist-instigators had to sell flight center engineers on their ideas, since the engineers would be busy with extant projects and subject to recruitment by other would-be project instigators. In our cases, these in-house campaigns for a project were conducted informally. Scientist-instigators would directly talk up their ideas with engineers, lobby upper-level center administrators (whose encouragement would be of obvious utility in enlisting talent), and organize on-site meetings of interested external scientists (whose presence indicated a serious interest in a project). In one case [ISEE], an engineer helping with preliminary design studies recalled that a scientist-instigator organized a meeting of external scientists at the flight center to discuss the project and used a cocktail party connected to the meeting to buttonhole the director of the flight center about supporting the project. It was through that informal show of political strength, according to the engineer's memory, that the project acquired the credibility within the flight center to attract the engineering support needed for Phase A studies. In another case [IUE], a project quickly acquired in-house credibility because an upper-level administrator was temporarily serving as director of the relevant science division.

The two projects that took advantage of rare astronomical configurations were initially seized on by spacecraft designers at flight centers. In one case [Giotto], the space flight center had nobody with longstanding interest in the science of the project. In the other, while the flight center could make use of interested scientists, the spacecraft designers used the potential for a project to initiate a sub-system development program to bring new technologies into spacecraft engineering. Both projects depended on demonstrating the existence of a scientific community willing to work out and then build a payload of instruments for the spacecraft. When ESA officials have needed assurances that scientists support the plans of ESTEC, the deliberations of the specialized working groups reporting to ESA's Science Advisory Committee have been critical. When NASA officials have needed such assurances, they have looked to the National Academy's Space Studies Board (or the Board's relevant subcommittee) or to one of the advisory boards of external scientists created by NASA's Office of Space Science and Applications[5].

Regardless of whether a project pushed from a flight center originated with scientists with ideas for better measurements or engineers with ideas for getting scientists to a rare natural event, project instigators have cultivated an agency headquarters scientist to promote the project within the agency's budget deliberations. Headquarters scientists, whose sense of accomplishment has depended on successfully promoting projects, have been receptive but discriminating consumers of suggestions. They have consistently used "Working Groups"[6] to judge or refine the outlines of projects. Depending on the size of the project, working group deliberations have been either the principal hurdle for project instigators, or one major step in a longer path to approval.

Three of the four measurement-improving projects we studied in the United States were "Explorer-class," meaning they were eligible for funding from an established line-item in the NASA budget for smaller scientific missions and thus never directly scrutinized by Congress or the Office of Management and Budget. Their promoters at Headquarters were "discipline scientists," who needed the approval of the Associate Administrator for Space Science to place a project in the Explorer queue. A principal hurdle to obtaining that approval has been the discipline scientist's own standing Working Group. Because such working groups have included devotees of experimental techniques of relevance to a discipline, any project featuring one technique faced reflexive opposition from working group members wishing to save slots in the Explorer queue for their own. Thus if a discipline scientist's Working Group subscribed to a project, that was a good indicator of the project's general significance.

One measurement-improving project among our case studies and both projects that took advantage of rare astronomical configurations were expensive enough to require explicit authorization from the full agency (not just the agency office handling science programs) and higher political levels[7]. Their partisans first had to enlist a directorate-level official from the science office to promote the project. In one case, the flight center scientist who was interested in the project initiated a petition among outside scientists to catch the interest of an appropriate official [Giotto]. A directorate-level promoter, in addition to the work needed to convince political authorities of the value of the project, also had to manage the working groups. These working groups have been either standing parts of an agency's structure or groups assembled ad hoc to consider possible payloads for a particular mission. Their success at outlining a payload constituted evidence of broad support for the project in the scientific community.

The reward to the headquarters scientist who succeeded in agency budget politics has been to participate in drafting an Announcement of Opportunity (AO) and then to lead in the selection of proposed experiments. This selection was influenced by the comments of peer reviewers, engineering assessments of the proposals, and the sensibilities of the headquarters scientist's administrative superiors.

Space scientists seem to have accepted the necessity of empowering headquarters scientists to decide among proposals that differed significantly in cost, level of technical risk, or science strategy. There were very few accounts of research scientists who succeeded in attempts to limit the discretionary authority of headquarters by coordinating the production of proposals. In the measurement-improving projects, the principal investigators (PIs) we interviewed usually portrayed themselves as passively awaiting the AO and then enlisting the signatures and advice of co-investigators to the proposals they drafted. The effort to establish the scientific and engineering credibility of a project effectively warned an entire scientific community to prepare for an AO, yet in only one instance [ISEE] did two potential competitors unite in anticipation of the AO to create one proposal that increased the chances that both would participate by decreasing the options the headquarters scientist had for that experiment's slot.

In some projects, working groups drew up "straw-man payloads" to guide the competition for slots on the spacecraft. Scientists viewed participation in such working groups as an opportunity to embed a preferred instrumentation approach into project planning and to build up a team to propose that instrument once the AO was issued. However, we encountered several instances in which such "front-loading" of the competition failed. In two cases the "proto-teams" disintegrated into competitors over scientific or personality conflicts, in two others non-participants in the working groups won space for their instruments in the final payload, and in two others scientists successfully proposed to fly species of instruments that were not included in the initial plans. Our anecdotal impression is that scientists who do not participate in working groups are still viable competitors for the space in a payload.

Both of the projects that originated from outside the flight centers [AMPTE and Einstein] came from institutions whose engineers had previously designed and built satellites under government sponsorship. The importance of engineers to scientist-instigators can be seen in a counter-example. One of the intra-flight center projects we studied [IUE] had its intellectual origin in a twice-failed attempt to initiate a project from outside the flight centers. The project's would-be scientist-instigators, who were not employed at institutions with satellite-designing engineers, had initially proposed a scientific payload that could achieve its desired results only if the rest of the satellite's systems met prohibitively expensive specifications. After the proposal failed, the scientists consulted engineers and produced an integrated proposal that more cost-effectively balanced the burdens on the scientific payload and the rest of the satellite. When agency officials still doubted the project's fiscal feasibility, the instigators peddled their ideas to a flight center that had not been involved in the initial deliberations. Scientists at the new flight center modified the proposal to incorporate planned engineering developments, thereby making the project more scientifically appealing and an excellent challenge for the center's engineering staff. Thus from a failed outsider-instigated proposal emerged a successful flight-center-supported project.

The principal hurdle to the formation of the two projects instigated with minimal flight center support was the lack of a straightforward route into the agency's formal project-forming procedures for instigators with engineering support from outside the flight centers. In one major respect, the two sets of instigators used similar tactics to impress themselves on agency authorities. The instigators persuaded leaders in relevant experimental techniques to sign onto integrated proposals under a self-invented title other than principal investigator, which was reserved for the instigators themselves. The point, according to one of the recruits, "was to present to NASA something that was too good to turn down ... [viz.] the participation of [all] the major players ... so that there wouldn't be any serious competition from outside." The result was a partial subversion of a headquarters scientist's normal prerogatives. Another recruit thinks of a phone call from an instigator as "the only time we kind of were selected," though the individual added that an agency-run competition would have changed nothing because nobody else at the time had developed an instrument with comparable measuring power. In principle, the agency could have broken up the self-made collaboration and reconstituted the project with multiple PIs; one recruited scientist recalled being advised not to put effort into the instigator's proposal because the agency would never go for a "done deal." An important question for future study is whether measurements can be made of how frequently "done deals" have been proposed and whether their success rate differs from proposals for multi-PI projects.

Otherwise, the two outsider-instigated projects reached acceptance through contrasting political courses and contrasting claims of scientific benefits. One set of instigators [AMPTE] expanded the project's social base by calling theorists' attention to the project's potential to take data that could discriminate among competing theories. (However, the instigators were careful to maintain opportunities for participating scientists to use the project for generic exploration.) The other set [Einstein] expanded its social base by calling outside experimentalists' attention to the project's potential to accommodate guest users with diverse interests. (However, the instigators were careful to reserve operating time for experiment-building scientists to use for their own research.) Instigators of one project [AMPTE] successfully courted a headquarters discipline scientist, but one whose branch lacked the power to do more than keep hope alive with a slow stream of study funds. Only an Announcement of Opportunity from elsewhere in the agency created a niche within which the collaboration could compete for spacecraft- and experiment-construction funds. Instigators for the other project [Einstein] failed to make the appropriate headquarters scientist an early partisan for the project. Headquarters took notice because National Academy meetings called attention to the complementary scientific interests and common engineering needs of the projects of several prominent would-be instigators. That piqued a work-hungry flight center to design a program that fit the several projects. When a directorate-level advisory committee endorsed the program, the agency made it a major initiative.

Developments in headquarters beyond the influence of project instigators buffeted both these projects, but with opposite effects. In one case, delays in funding brought the project a larger-than-anticipated launch rocket, which enabled the scientist-instigators to create a larger project than originally planned. In the other, severe cutbacks in the budget for the program obliged the scientists to endure conflictual discussions over how best to "descope" themselves before the engineers at headquarters did it for them. In neither case did the conflicts lead to forced or voluntary resignations from the projects, but these cases do demonstrate the importance of intra-agency conditions and policies for social relations within space science projects.

We suggest the possibility of constructing a spectrum of project types and correlating the types with the social origins of the projects (at least for projects that first formed circa 1970 and were launched circa 1980)[8]. The project types range from community-reforming at one extreme to community-affirming in the middle to community-creating at the other extreme.

Community-reforming projects are represented by a project [AMPTE] that mobilized an extant community of experimenters by directing their attention to a particular scientific issue (while making possible other measurements compatible with the central work). The community-affirming projects are represented by projects [IUE, ISEE, Voyager, Giotto] that provided a better vantage point for an extant community to measure natural objects or environments of longstanding interest. Community-creating projects are represented by a project [Einstein] that developed a family of measuring techniques and shared their use with scientists the instigators believed should become supporters of their techniques. The community-affirming projects originated in flight centers, whose scientists made it their business to find advantageous spacecraft configurations or combinations for their peers, or whose engineers tapped scientists' enthusiasm for using rare astronomical configurations to create spacecraft-designing opportunities for their flight centers. The two extremes are outsiders' projects, whose instigators used "scientific charisma" to organize scientists into a single-PI structure in a way that would have seemed an excessive exercise of governmental power if attempted by flight center scientists.

NASA Headquarters has imposed a formal structure on space science projects. Program managers, engineers by training, at NASA Headquarters have overseen project managers, also engineers by training, at NASA space flight centers. Project managers have overseen the design, construction, and integration of spacecraft, including their payloads of scientific instruments, by employing some combination of in-house engineers, external industrial contractors, and in-house and external PIs. The PIs, scientists by training, have designed and built scientific instruments using some combination of staff engineers or scientists, external contractors, and co-investigators. A project scientist, typically a PI and an employee of the space flight center, has been responsible for advising the project manager on spacecraft engineering options that could affect the project's scientific capabilities and for keeping the other PIs informed of spacecraft engineering developments. To discuss collective scientific concerns and resolve them, the project scientist has led meetings of a "Science Working Group" (SWG). The precise membership of the SWG has varied across projects but has always included all the PIs and select members of their teams. The project scientist has also reported to a program scientist at NASA Headquarters, who has been able to bring scientists' concerns to the program manager or their mutual superiors within the Office of Space Science and Applications (OSSA).

These arrangements have attempted to manage an intrinsic tension in the concept of space science projects: which is the more difficult and significant challenge—sending and operating equipment in space, or satisfying criteria of scientific value? Space projects, whether pursued for science, national security, international prestige, or commercial advantage, have had common problems of design and operations; vesting managerial authority for space science projects in engineers well versed in spacecraft has placed scientists under a discipline of useful expertise that has often not been part of their own professional training. However, science projects, whether pursued in space, the natural earth environment, or the laboratory, have been valuable only if they yielded new or improved data; providing scientists with their own line of communication to OSSA's higher authorities has reminded engineers that they must serve as well as manage the PIs. In space science projects, NASA's answer to the science policy question of whether scientists should be on top or on tap has been "some of each."

The multiplication of lines of power built into the formal structure of space science projects has insured that even projects that fit well into that structure will vary significantly. A fortiori, projects that grafted the formal structure onto a structure of their own making created even more variance. Instances of both kinds of variation are well represented in the projects we studied. Nevertheless, the formal structure, whatever its defects as a description for how the projects have been organized, does create consistent terminology for identifiable elements of space science projects. We will accept the terminology and work towards an assessment of the character of these elements and their relations to other aspects of the projects.

A. The Scope of the Science Working Groups
Science Working Groups in our sample varied in how much business they handled. Scientists appear to have been torn between limiting the scope of the SWG, and thus maximizing their autonomy from each other, and expanding the scope of the SWG, and thus maximizing their unity in dealing with project engineers and outside scientists. The policy of scientists in any particular project depended on the project's origins and the way in which the project's participants joined up. Projects that originated within the space flight centers and that were staffed with PIs chosen through a competition organized by NASA Headquarters had relatively circumscribed SWGs in comparison to projects that originated in outside laboratories and that were staffed with scientists recruited by the projects' scientific instigators.

In the most extreme of our case projects [IUE], the SWG was initially ceremonial. Its members were not PIs in the sense of experiment builders, because the flight centers planned to build the spacecraft entirely on their own for outside users; they were scientists who had submitted early proposals to use the spacecraft once completed. This SWG's function, even in the eyes of its members, was to demonstrate the existence of community support for the project outside of the NASA flight center. Its advice "rubber-stamped" what the flight center scientists and engineers had decided was the proper course. The SWG's efforts became more meaningful as the project grappled with observatory operations and data processing. But nobody pointed to the SWG as an important body for determining the character of the observatory or the science it made possible.

More commonly, the SWG restricted itself to dealing with collective issues that were engineered into the project's initial design. In all projects (except the one discussed in the preceding paragraph) in which the instruments, upon integration and testing in the spacecraft, disturbed each other, the SWG provided a forum for working out procedures to minimize interference in operations. However, special steps were sometimes instituted in projects with magnetometers, which are highly sensitive to electromagnetism. The cases we studied included one [ISEE] in which the SWG took on the issue of minimizing the spacecraft's electromagnetic noise, and one [Giotto] where the magnetometer's PI had to work out problems individually with the builders of systems that could create noise. For planetary projects, the flight path of a project's spacecraft(s) was the outstanding example of an SWG issue. Because a spacecraft's course could determine what scientists could measure, spacecraft trajectories and orientations were always subjects of SWG discussion. For astrophysical projects, initial selection of objects for observation was the outstanding example of an SWG issue. Because irreparable damage to the observatory could occur at any time, the SWG had to decide which uses would leave the most important legacy in the event of early failure. Collective issues, though limited, could still be sufficiently important to prove taxing. For example, one project developed a system of sub-committees to the SWG to produce first cuts at trajectory and orientation issues prior to their consideration by the full SWG; yet a PI left this project partly out of an inability to elicit support for spacecraft orientations that favored the observations he wished to make.

The SWGs for the two projects that originated outside flight centers were the only SWGs in our sample that expanded their scope to include more than the collective issues created by the project's basic engineering. Because of the leadership of the scientist-instigators, the terms of their recruiting pitches, or the sense that scientists could choose their collaborators, the members of these SWGs were either more willing to put collective achievements on a par with individual achievements, or they were more insistent on controlling engineering judgements that could affect the instruments' capabilities. In one case [Einstein], the SWG oversaw integration of a unified science payload, which it then delivered to the flight center for assembly into the spacecraft. To insure that the participating scientists would take a constructive interest in one another's technical problems, they guaranteed each other limited rights to use one another's instruments and painstakingly negotiated among themselves a division of the scientific topics that each would initially address. In the other case [AMPTE], the SWG coordinated the operations of the several experiments at selected times and pooled the intercalibrated data streams in order to obtain as comprehensive a view as possible of important events. To insure that participating scientists had opportunities for individual successes, experiment builders also had ample time to run their individual experiments as each saw fit.

Even the self-organized projects that originated outside the flight centers never placed all areas of science activity within their SWGs' jurisdictions. Experiment builders almost always cared principally about the spacecraft's capabilities and their individual interfaces to it rather than the capabilities and designs of other experiments. Individual teams decided when and where to disseminate their findings, except when the projects arranged to have a special journal issue or conference session dedicated to the project. Individual teams decided the content of journal articles and conference talks. When scientists within a project reached different conclusions about the same topic, they almost always disseminated their views individually without attempting to reach an intra-project consensus.

B. The Scope of Flight Center Officials
[Table of Contents]
In every project, the flight center project manager was responsible for the project's money and schedule. That made the project manager the ultimate consumer of technical information from contractors, and usually the most powerful individual in the project during its design and construction. However, the precise scope of project managers' powers has been influenced by how far they were involved in the formation of the project. Project managers were not officially designated until the agency was ready to commit funds for detailed design work (upon the project moving from Phase A to B); and in half of our cases, the project manager had not participated in the engineering studies that had established the project's feasibility and likely costs. When assuming management of projects whose scientists had already decided on the terms of their relationships, project managers have found their authority bounded by these pre-existing relationships.

Project managers who had participated in the early engineering studies usually imposed their flight center's customs for reporting requirements on the PIs and for the organization of flight center staff on the project. So did managers of projects whose scientists had not been on board from the outset. Because of accumulated traditions and experiences, the flight center customs were well understood by project managers and PIs[9], even when the PIs resented the flight center's culture or the project manager's style. Many issues were resolved in communiques between PIs or their engineers and the project manager or a flight center staff member the project manager assigned to track science payloads. (Such a staff member was variously titled "instruments manager" or "payload specialist.") To deal with remaining issues, the project manager almost always attended SWG meetings.

The effectiveness of this arrangement is evident both in the absence of hostile memories and in the recognition scientists occasionally gave to flight center staff members. In one of our cases [Voyager], where the project manager assigned separate staff members to track individual experiments, experiment teams in at least two instances ended up making space on their teams for their liaisons. When PIs resented project managers' decisions over administration or over how to distribute mass, electric power, telemetry rights, or other limited resources among the spacecraft systems, the PIs protested in ways that did not elevate the dispute to Headquarters, and ended up unhappily deferring to the project manager. One project manager recalled receiving an "earful" from the PIs about how cumbersome they had found the functional divisions into which the manager had divided the project. However, the PIs did not speak out at the peak of their difficulties but just before launch, when some of those divisions and the project manager himself were about to diminish in importance. In two separate projects [Voyager and Giotto], interviewees recalled the project manager ordering changes in an experiment's hardware. Even though the PI in one case felt the benefit of the change to the project manager was out of proportion to the difficulties imposed on the experiment team, in neither case was there any mention of discussion in the SWG or mediation by the project scientist[10].

Project managers had the most difficulty on the two projects that scientists outside flight centers had organized themselves, regardless of whether the scientists found the flight center's culture hospitable. In one case, the scientists expanded the SWG's scope to the integration of a total science payload, intruding on areas that were otherwise within the project manager's jurisdiction. The scientists deliberately excluded project management from some of their meetings, and flight center project staff felt misled about the cost and progress of parts of the payload; the scientists admitted that to protect their autonomy they were less than forthcoming about problems. In the end, relations between scientists and project management were so poor that the scientists were unable to elicit the release of a marginal amount of money for hardware that could have significantly extended the life and productivity of the spacecraft. In the other project, the unity the scientists achieved by planning coordinated experiment operations enabled them to impose their will on the project manager. When a change in launch vehicle increased the mass the project could launch, the project manager would have preferred incremental increases to the payload, but the scientists rallied behind adding a fully instrumented sub-satellite that added complexity to the interfaces of the original mission. The project manager relented, but his relations with the principal scientists turned "polite but tense."

During mission design and construction, the needs of the project manager consistently determined the scope of the project scientist's work. When the SWG dealt with collective science issues within the planned boundaries of resources [ISEE and Giotto], the project manager needed the project scientist for guidance on when engineering expediency in design or construction could upset the scientists' planning. To keep control of the project, the project manager just had to avoid pushing the scientists to activate their independent access to headquarters officials. When the SWG incubated conflicting ambitions that the spacecraft could not handle [Voyager], the project manager needed the project scientist to adjudicate conflicts among the scientists and mediate between the scientists and project management. In these situations, a project scientist needed to be a trusted authority on the scientific merits of the several experiments. In the two cases when the scientists collectively enlarged the responsibilities of the SWG to intrude on the project manager's domain, they did so under the leadership of a lone PI who spoke for the collective science interest. In both these situations, the project manager's diminished powers led to a diminished role for the project scientist, because there were fewer matters on which the project manager needed a scientist's advice, and because the lone PI overshadowed the project scientist in stature within the project.

After the launch, project scientists administered project funds for data analyses and fielded proposals from members of science teams pursuing longer-term research on their data sets. Once funding for the project ceased, science teams had to obtain funding for analyses in the general competition for NASA program grants.

C. Coordination Among Flight Centers
[Table of Contents]
The cases we studied included three international, multi-flight center projects: two multi-spacecraft projects in which one spacecraft was built at each flight center, and one single-spacecraft project in which the flight centers each built part of the spacecraft. The multi-spacecraft projects were consciously organized to minimize inter-flight center engineering interfaces, to maximize the project managers' individual and collective latitude, and to leave coordination of the project's greater-than-national capabilities to post-launch operations. In one case [ISEE], the project managers maximized autonomy for the two centers by cultivating each other personally and paying careful attention to the interface between the spacecraft. In the other [AMPTE], the project relied on more numerous lower-level contacts among the spacecraft engineers to achieve compatible spacecraft.

In both these projects, the SWG operated as an international body. In one, SWG meetings were held on both sides of the Atlantic with the host's project scientist chairing the meeting; in the other, the lone PIs for the spacecraft jointly chaired the meetings. In both cases, the SWG decided how and when to operate the spacecraft in a coordinated fashion.

The organizers were not able to divide labor so cleanly in the one-spacecraft project involving multiple flight centers. Two of the three participating centers built components whose design and interfaces directly affected each other[11]. This division enabled the project to take advantage of the centers' complementary strengths (one center had experimented with a new technology that was vital to one component but lacked the resources to build the related components, while the other center lacked experience in the new technology but had the larger staff and budget). Because the components had to be integrated, scheduling appears to have caused more stress in this project than in others, with both centers straining to avoid being a bottleneck to progress. Representatives of the two parties used frequent telephone calls, attended each other's design reviews, and stationed personnel at each other's shops to insure consistency in design. These practices worked; for example, one center successfully recommended a design change to the other's component in order to accommodate unexpected idiosyncrasies in the first center's component. Ironically, this technical intimacy was followed by independent data-acquisition strategies. The centers divided responsibility for spacecraft operations by granting each blocks of time for its own use. However, scientists from each center have occasionally collaborated as individuals in order to create larger chunks of operating time than any could obtain on their own.

Differences in national forms of organization and culture have been apparent to interviewees in these projects, but they never proved a stumbling block to coordination. In two instances, European scientists rather than engineers served as project managers, but their American counterparts recounted no cultural obstacles to communication. In one instance, a European nation designated an original instigating scientist the "project director" in addition to designating a project manager and project scientist, but that reflected well-understood idiosyncrasies in the formation of the project and never confused other collaborators about where to go for information.

Europeans have noted that NASA space flight centers, which have routinely assembled and integrated spacecraft, have larger staffs than either ESTEC (which always has contracted out spacecraft construction) or the European research institutes (which have infrequently built spacecraft). Some have noted—sometimes in admiration and sometimes in frustration—that American flight centers seemed to respond to difficulties by holding open brainstorming sessions that drew in their large staffs with their varied specialties, rather than adding a few more people to the project.

Undoubtedly the most significant organizational differences are the obvious ones: ESTEC project managers have never controlled the funding for experiments while NASA project managers have; and ESTEC project scientists have never been PIs while NASA project scientists have. How one could document any effects of this difference is unclear. Perhaps ESTEC project managers have felt broader license to impose technical burdens on PIs, because the managers do not have to fund the work to meet those burdens and because project scientists, not being PIs, lack the status to oppose them.

D. The Scope of NASA Headquarters Officials
[Table of Contents]
Once Headquarters had selected a flight center, selected the PIs, and initiated the flow of money for a project, its officials lost most, but not all, ability to exert daily influence over a project. Whether they continued to be active in a project depended on the project's budget and the intensity of conflict between scientists and project management. When a project was expensive for its time, or when conflict within the project was sufficiently intense, Headquarters officials were influential.

Project managers always wanted headquarters program managers to limit themselves to handling the project's external relations with the rest of Headquarters and the political institutions that oversee the agency. In order not to excite suspicions that projects harbored hidden problems, project managers routinely invited program managers to project staff meetings. This tactic worked for conflict-free projects that were part of the low-budget Explorer program. To both scientists and engineers in these projects, it was flight center administrators who were the powerful officials to worry about in the event of technical problems.

In the two projects that were too expensive for the Explorer program, headquarters program managers were more active. In one case, the program manager accepted invitations to design meetings and, to the dismay of the project manager, argued technical points with the project manager and participating scientists. In the other case, the program manager telephoned the project manager so frequently that the program manager felt all major issues were resolved verbally before plans were committed to writing. Neither of these program managers managed one of the Explorer projects we studied, so we have no evidence on whether their activism was a function of project costs or personal styles. But we would guess that when projects have been expensive enough to attract political attention, program managers, who must represent their projects to higher authorities, have needed to participate in project decision-making in order to feel comfortable with their duties.

Program scientists always became significant when participating scientists and project managers could not resolve their conflicts. In our cases, such conflicts only occurred in projects that originated outside the flight centers. In one case [AMPTE], the program scientist had been the headquarters scientist who had supported the scientists' ambitions to form a project. He helped the scientists obtain permission to attempt centralized processing of data, even though the project manager doubted the feasibility of the operation within the project's schedule and budget. In the other case [Einstein], the program scientist had assumed his duties well after the project had been formed and did not wish to see a tradition of self-formed projects become established. When the scientists took to Headquarters their dispute with the project manager over spending a modest additional sum for mission-extending hardware, they were denied the funds.

[Table of Contents]
"Experiment" in the terminology of space science has referred to the design, construction and operation of an instrument plus processing and interpreting the signals the instrument returns. For purposes of design and construction, an instrument was often broken down into self-contained "boxes," whose mechanical interfaces were cleanly and simply specified at the start of the project and whose digital interfaces could be worked out over the course of construction. In this section, we will use "principal investigator" (PI) to mean the scientist in charge of an experiment, whether or not that was the title used in the project. Other team members with independent standing as scientists usually held the title "co-investigator." The significance of that title, as will be seen, has varied.

A. Origin of Space Experiments
[Table of Contents]
Space-based experimentation has involved many intrinsic technical difficulties not encountered in laboratory work. Instrumentation has had to be light enough to reach its destination on the rockets of the day and mechanically strong enough to survive the vibrations of launch; it has had to operate on the electrical power the spacecraft has provided, keep operating without hardware repairs for a worthy amount of time amidst the environmental hazards of space, and read out electronically so that collected data can be telemetered back to Earth. Experimentalists in space science have routinely employed two strategies for coping with these technical facts of life. First, they have specialized in the design and construction of particular types of instrumentation; once they have been able to "space qualify" an instrument, rarely, if ever, do they even consider diversifying into a new area of instrumentation because of the competition they would face from established specialists. Second, they have relied on commercially available components and industrial expertise in building their instruments. Because many of their technical problems—e.g., detecting faint radiation signals and reducing the weight of equipment—have been generically equivalent to the military's technical problems, space scientists have often been able to adopt or adapt what commercial manufacturers have researched and developed for military use.

Of 27 experiments for which one or more interviewees addressed the relationship of an instrument to earlier work, 23 were described as updated or modified versions of what the PIs or their mentors had flown on a previous mission or designed for a previous mission. One experimenter spoke for many in pointing out that while the basic design of his instruments has remained constant, the detector elements and logic circuits he buys keep getting better. With better detector elements, experimenters obtain information on broader, more continuous bands of radiation frequencies or particle energies. With better logic circuits experimenters can operate their instruments in more modes and obtain more information from their allotted telemetry. Experimenters also occasionally pointed to the availability of exceptionally lightweight materials as essential to making an older design viable in a new mission.

The military context in which the parts and materials of space instrumentation originated has not noticeably hindered space scientists. In only one instance did an interviewee need a low-level security clearance to work with a component. Even with the clearance, he felt he lacked ready access to the information needed to understand the part's internal workings; however, instead of handicapping his work, the restrictions may have helped him to discover that the component could be reliably used without knowing precisely how it worked. In all other instances, space experimenters simply began buying components as their manufacturers obtained clearance to market them, or experimenters contracted with the firms the military used in order to acquire materials or components.

In the rare instances when space scientists attempted to develop technical novelties, they still did not consider the new device suitable to incorporate into a space mission unless an industrial firm took up its manufacture. Even then, difficulties in manufacturing a device could discourage a firm or cause a loss of confidence among other participants in the project. Such problems were cause for helpful intervention by a flight center, which recognized that the viability of a science program depended on commercial suppliers of specialized components.

Scientists interested in carving out a niche for themselves in space experimentation consciously looked for laboratory set-ups they thought could be adapted for use in space. The experiments covered in our cases included three claims of "space firsts" (at least for unmanned space missions)[12] for a particular instrumental technique and one instance where a team sought to create an order-of-magnitude improvement on what another team had previously flown. Three of these four cases nearly ended in disaster. In one case the PI did not realize that some part of his instrument was outgassing material that could distort readings until the instrument was being integrated into the spacecraft; the PI had no chance to change the hardware but fortuitously found a way to minimize the effect. In another case, flight center officials considered mothballing the project because an experiment team's contractor could not manufacture components that equalled the performance of the team's prototype; eventually the contractor was replaced and the project stayed close enough to schedule by using the prototype for testing and integration with the rest of the spacecraft. In a third case, the inexperienced team was so late in delivering the instrument that it was not calibrated as carefully as planned and its effects on other instruments were not checked as fully as other PIs desired; the scientists learned post-launch how to assess and compensate for their effects on each other. Such anecdotes suggest why space experimentalists work within an instrumentation niche rather than venture frequently into new technical areas.

B. Organization of Experiment Teams
[Table of Contents]
Experiment teams have usually had a center-periphery structure. At the center has been a small number of institutions—sometimes just one—overseeing hardware development and basic data-processing software. On the periphery have been several co-investigators, often from other institutions, providing additional expertise in the science analysis of the data. In this manner, work on the many technical problems of space-based instrumentation has been efficiently centralized without shutting out scientists with lesser instrumentation skills or resources, and without wasting data on experimentalists unaware of all the ways the data could be used.

In the projects that were entirely managed in NASA flight centers, one institution almost always oversaw the development of each instrument. PIs struggling to adapt a laboratory instrument for space use sometimes elicited help from the original laboratory developers on an ad hoc basis. Co-investigators never influenced the technical development of an experiment; they were chiefly of symbolic importance, demonstrating the existence of outsiders' confidence in the scientific value of a proposed experiment. In one case, a postdoctoral scientist consciously shifted from a co-investigator's to the PI's institution in search of technical intimacy with the instrument; the postdoc recalled that the PI unilaterally decided to use recently developed detector components that had been discussed only in the proposal's appendix. In another case, co-investigators who had been added to a team at the insistence of NASA Headquarters were unable to budge the PI from building an instrument that was higher in sensitivity and lower in resolution than seemed appropriate for the bulk of the mission.

By contrast, interviewees from more than half of the experiment teams on international projects (whether between NASA and ESA, NASA and the space agencies of individual European nations, or among European nations contributing to an ESA project) reported that their teams divided responsibilities for the development of each individual instrument among institutions from different nations. The leading scientists from each institution had to agree on who among them would be the PI. Often political expediency made one scientist the obvious choice—e.g., an experiment team consisting of an American and a European institution would shift the title and duties of PI depending on whose agency was soliciting proposals for the kind of instrument the team made. The other scientists whose institutions would contribute part of the experiment were designated co-investigators. Thus the term "co-investigator" was ambiguous in these missions; it could refer to scientists who contributed part of the instrument or to scientists who boosted the experiment's scientific breadth and credibility but were marginal to the experiment's design and construction.

In experiments with multi-institutional contributions to design and construction, the PI had to decide on the allocation of the experiment's spacecraft resources among the instrument's components and was responsible for keeping the several parts compatible. PIs varied temperamentally in whether they preferred to build consensus or make quick personal decisions over allocation issues. In many cases, the institutions of an experiment team had previously worked together, trusted each other, and built the experiment out of self-contained boxes that interfaced digitally but with minimal mechanical complexity. Even the scientists who were working together for the first time reported nothing particularly taxing in their social arrangements (with one exception, in which the PI was a technically inexperienced theorist whose administrative superior was unsympathetic to his experiment). The overwhelming judgment among scientists in multi-national experiment teams was that the benefits of dividing costs across governments and dividing labor among institutions with complementary strengths far outweighed the extra time and money spent insuring that independently built parts would work well together.

C. Organization of Data Acquisition and Analysis
[Table of Contents]
In all the cases we studied, the engineering of the mission forced participating scientists to work out a more or less elaborate policy for acquiring data. In two cases [IUE and Einstein], instruments could only be used in series and a schedule had to be worked out; in two others [ISEE and AMPTE], the operations of multiple spacecraft and at least some of their experiments needed to be coordinated; in another [Giotto], telemetry formats and spacecraft trajectory needed to be adjusted to suit particular experiments at particular times; in another [Voyager], several instruments were mechanically constrained to operate in unison and a schedule had to be worked out. In all but the last case, experiment teams were unified in their desires and data-acquisition strategies were set in the SWG. In the last case, experiment teams included co-investigators with diverse scientific interests and conflicting data-acquisition strategies. Initially, the PIs were obliged to represent their teams and to learn where divisions in one team coincided with those in another. But over time the project formed cross-team interest groups to discuss how best to share data-acquisition capabilities, and the SWG only dealt with conflicts that the interest groups did not resolve.

Once data were collected, experiments and projects differed in the extent to which they processed and distributed their data. At one extreme were two projects [IUE and Einstein] that standardized processing with the goal of creating data that outside users could confidently analyze and interpret. At the other extreme were two projects [Voyager and especially Giotto] in which each PI controlled a team's data with minimal contractual requirements, few intellectual benefits, and even fewer social incentives to prepare the data for distribution outside the team. Towards the first extreme was one project [AMPTE] that eliminated proprietary rights to data within the project by processing data streams in parallel for collective consideration. Towards the second extreme was a project [ISEE] that required experimenters to distribute their "rough numbers" within the project with the understanding that anyone intrigued by something in another's data would check with the relevant PI before proceeding towards publication.

The three projects that treated data as relatively public property included the two that formed outside flight center auspices. The scientists' ambitions in these community-reforming or community-creating projects required, in the first case, that they collectively document the phenomena they believed should attract attention, and in the second case, that they enable many scientists to convince themselves of the utility of the project's family of techniques. The community-affirming projects that formed within the flight centers usually aggregated scientists' individual ambitions for improved measurements of phenomena of known interest. In these cases, project scientists took a laissez-faire stance towards how PIs shared their data.

Data collectors usually had formal proprietary rights to their data for a defined period (usually one year) and then had to turn over the data to an archive. Still, PIs were often treated by other scientists as the owners of the data for considerably longer periods. (In the projects with standardized processing, data collectors were often not PIs, in the sense of experiment builders, and were less likely to be viewed as owners of archived data.)

Typically, PIs made their experiments' data fully available to all their co-investigators. In the few cases where PIs were hard pressed to produce the hardware or basic data processing software, they thought their co-investigators were better positioned than themselves to take the lead in producing science analyses. Teams varied in whether they regulated who worked on particular topics that could be addressed through the data. Europeans seemed more prone to implement a division of topics.

The overwhelming consensus of interviewees was that experiment teams have striven for, and occasionally achieved, self-sufficiency in their ability to perform scientific analyses. Some see this condition as driven by career and political structures—experimenters need distinctive results to justify the public support they received and to stand out in the competition to participate in future projects. Others see self-sufficiency as driven by technical and cognitive realities—the amount of work an experimenter invests in building an instrument and processing its data into meaningful physical units precludes working with others' data. But in general, projects left PIs to dispose of their data during proprietary periods, and provided neither supports nor impediments, technical or moral, to inter-team data sharing or analyses.

Experiment teams, however, were frequently unable to achieve the ideal of scientific independence. In most cases, teams found their way more or less easily to exchanges of processed data, with the understanding that borrowing scientists would have the lending PI check the borrower's work before the borrower disseminated results based on the loaned data. Such exchanges were more easily reached when, for example, members of two teams felt equally dependent on one another; when a borrower's scientific interests required a less refined version of the lender's data than the lender wished to work on; and when a PI owed his spot on the spacecraft to other scientists' success at expanding the payload.

The ease with which errors can slip into data analyses and the importance of having a PI check a borrower's work were stressed by several interviewees. Two of them reported instances in which their mistakes in the use of data were caught by a PI or an instrument operator. One member of an experiment team reported publishing an erroneous analysis of the team's data because of a failure to recognize that a recalibration of the instrument did not leave the instrument trustworthy in all ranges of interest. A member of a different team reported that outside scientists interested in the team's data had the original team process the data for the outsiders rather than work from archived data.

In a few instances, experimenters did become interested in each other's raw data. In one project that originated outside the flight centers [AMPTE], experimenters committed themselves, for particular periods, to collective assessments of their data streams, and consequently had to inter-calibrate their instruments. Such an arrangement was so rare and the confidence of the several PIs in their instruments was so high that one experimenter joked that the project violated "a rule of experimental physics that you should never measure anything more than once, because if you do, you'll get two different answers—and we quite deliberately had overlaps." Differences among the instruments' readings were narrow enough and relationships among the experimenters strong enough that the feared disputation over whose experiment was truly reliable never materialized. By contrast, in a project that originated within a flight center [Giotto], a team took to digitizing the published data of another in order to be able to work with both teams' data. In only one instance, involving a project with standardized data processing, did researchers reprocess raw data from an instrument they had not built. When they used their programs not only on the data and topics they were initially granted as co-investigators, but later on data and topics their collaborators had worked on, their work spawned an intense, open scientific controversy over the quality and interpretation of the two groups' findings.

D. Dissemination of Results
[Table of Contents]
The title "principal investigator" conveys responsibility for the quality and publication of data. In the four projects in which multiple PIs built their own instruments or took their own observations, the PIs judged the content of their teams' publications independently of each other and decided on the time and place of journal submissions. (An exception to this generalization has been the journal issue dedicated to a project's results; in these cases, the project determined time and place of submissions, but the PIs still independently assessed their teams' contributions.) In the two projects with single PIs, only one participating scientist reported that a PI tried to use the titular position to demand the right to approve a paper before publication. That scientist felt he secured his rightful freedom to publish after threatening to publicize their argument.

Experiment teams in space science have involved few enough scientists that they have not needed formal procedures to regulate the production of scientific papers. Fear of errors in analysis and common courtesy have ensured that when writers work with more than one data set, they elicit and recognize contributions from within their team and from other teams. Tolerance of open differences in the interpretation of data has allowed writers to publish papers without achieving consensus among all scientists who contributed to the data[13].

[Table of Contents]
In the United States, NASA has been the sole supporter of space science. NASA's Office of Space Science (and Applications) provides this support in two ways: through grants programs, which discipline scientists use to support analyses of extant data and to support research and development into instruments for taking data, and through contract programs, which program managers use to support the construction of spacecraft and their payloads of scientific instruments. In Europe, the several nations have their individual means for supporting research and development. ESA's Science Directorate has contract programs to support construction of spacecraft for science projects. In both cases, the separation of support for research and development into instrumentation from the support of instrument construction for space projects has encouraged technical specialization and conservatism in projects. With their nations' support for instrument research and development, PIs have been expected to figure out how to "space-qualify" an instrument that could outperform predecessors or measure new parameters. Only then could they reasonably hope for success with a proposal to build an appropriately tailored version for a particular project.

The scientists and engineers who eventually absorbed project funds were rarely privy to decisions that compared the value of science projects serving different areas of science, and none of those we interviewed spoke of any involvement in comparisons of the value of science to other space activities. Project instigators were active in the politics of funding at the level of addressing panels and working groups, whether organized by a national academy or a space agency, in the hopes of establishing that their projects best advanced their field of science. Once that groundwork had been laid, even the best connected of project instigators appear to have been excluded from higher policy discussions. In NASA Headquarters, "branch chiefs" or "discipline scientists" (the title has changed over time) pass their recommendations upward to division chiefs, who cover broader areas of science. Division chiefs report to the Associate Administrator for Science. Each individual in the chain has a working group of scientists to help consider programmatic possibilities; these groups are apparently becoming generators of ideas and plans as well as reviewers of the ideas and plans of others.

A relatively expensive project will be funded only if it is part of a program that has won a dedicated line item in the political negotiations over the NASA budget. Otherwise, a project must be inexpensive enough to fit into the Explorer Program[14]. Project instigators have so badly wanted to fit into the Explorer Program, in order not to have to face the uncertainties of higher political reviews, that they have knowingly resorted to dubious accounting practices. Inclusion in the Explorer Program did, however, come at the cost of ongoing competition with other Explorer projects for the funds that Congress appropriated in any given year.

The ESA Science Directorate has resembled an enlarged Explorer Program. With its budget set on a five-year basis, the ESA Science Directorate does not face a political hurdle equivalent to acquiring a new budget line every time space scientists unite behind a major project. European scientists' and engineers' ideas for projects are reviewed by an astronomy or solar system working group. Surviving plans go to the Space Science Advisory Committee, whose findings are rarely reversed by ESA's Science Policy Committee, a decision-making body that has included a representative from each member nation. As with projects in NASA's Explorer Program, ESA's projects influenced one another's schedules and budgets.

The funding patterns and accountability systems of NASA and ESA have diverged after a project manager has been appointed. Depending on the demands of the project and the supply of resources at the flight center, NASA project managers have had the option of contracting out construction and integration of the spacecraft to an aerospace firm or directly overseeing assembly and integration in-house. ESA project managers have always had to contract out for spacecraft construction and integration. The NASA project manager's budget has had funds for construction of both the spacecraft and its payload of scientific instruments (though occasionally another US government agency or a foreign nation has contributed funds for part of the science payload). The ESA project manager's budget has had funds only for construction of the spacecraft; scientists building instruments for the payload have obtained their funds from their national governments. European scientists, even from the same nation, differed on the severity of the national reviews for the funding of experiments: some considered the review pro forma given ESA's endorsement of the experiment; others prepared carefully for the review to show respect for national administrators and to guard against any unexpected squeezes on national funds; and some considered the review a major hurdle given the nation's scientific traditions and priorities.

A two-by-two matrix can be constructed for accountability relationships in space projects. Project managers can both oversee construction in-house and manage funds for scientific instruments; or they can build in-house and not fully control instrument funding; or they can build out-of-house and control instrument funding; or they can build out-of-house and not control instrument funding. ESA project managers always fell in the fourth category, and NASA project managers predominantly fell in the first or third. But some NASA project managers have also had a foot in the second or fourth categories when one or more of a project's experiments came from European scientists or from American scientists with non-NASA funding. In practice, none of the NASA projects we studied put project managers entirely in the fourth category.

Whenever project managers had the spacecraft built out-of-house, they wanted communication between scientists and the prime contractor to flow through them. Managers feared that unregulated technical discussions between scientists and contractor personnel could undermine their control of budget and schedule by spawning plans or expectations among people over whom they lacked authority. The fourth-category ESA projects have probably been the most structurally contentious, since the project manager and PIs have drawn their funds from different sources and thus have no direct incentive for keeping each other's costs down. In the case we studied [Giotto], the project manager did impose noteworthy technical burdens on some experiments and prevented back-channel communication between experimenters and spacecraft builders. But the project that seems to have induced the most enduring bitterness involved a third-category NASA project in which scientists struck a deal with the spacecraft builder for extra hardware, only to see the project manager refuse to allocate the funds.

Projects in which the project manager both oversaw assembly in-house and managed the funding of experiments had the least structural conflict. Scientists had less to gain from back-channel communication with a contractor that was providing a particular spacecraft component than from a contractor building the entire spacecraft. Project managers had less to gain from insisting that scientists meet more stringent engineering requirements because the project managers had to fund the extra work involved.

[Table of Contents]
Space science has been international at two levels. Projects have combined the efforts of flight centers and scientists in multiple nations. And experiments have been built by multi-national teams of scientists and engineers. Most interestingly, only multi-national experiments, not projects, have been a crucible for the creation of enduring working relationships.

A. Internationalism in Projects
[Table of Contents]
Including the ESA project, which was intrinsically international, four of the six projects we studied were organized on a multi-national basis. Four forces were responsible for making projects international: the desire to combine technical specialties that had become better developed in different nations; the desire to broaden the base of scientists competing to participate in a project; the desire to spread the costs of a project across governments; and the desire to use a quasi-diplomatic agreement to make projects more difficult to cancel. No single force entirely justified the internationalization of any individual project, but different forces were more important in different types of projects.

Two of the four international projects we studied were community-affirming and formed with the active support of flight centers. In both these cases, broadening the scientific base and increasing the commitments of governments were the predominant forces for internationalization. In a NASA-ESA project [ISEE], a European scientist recalled that Europeans and Americans had been independently contemplating similar two-satellite missions. When they became aware of each other's thinking, they saw obvious benefits in cooperating as equals with each agency building one spacecraft: a better scientific payload could be formed from an international competition for instrumentation slots; and the international agreement would make the project less vulnerable to cutbacks in case NASA encountered budgetary pressures. In an ESA project [Giotto], a core of largely German instigators felt they needed more prospective participants from other nations to boost the project's viability within ESA's political structure. The campaign to broaden international participation netted worthy proposals for instruments that had not been in the straw-man payload and significantly increased the project's scientific breadth.

Spreading costs across governments and coordinating technical specialties better developed in different nations were relatively unimportant forces in international, community-affirming projects. The NASA-ESA mission was pursued under the Explorer program on the NASA side; its American advocates may have preferred to keep their needs within the Explorer umbrella, but they could have aimed for an entirely national program with its own line in the NASA budget. The project manager of the ESA mission felt secure enough with the budget to reject a NASA offer of operations and launch support in exchange for a guaranteed spot for an American experiment in the payload. In both of these projects, the participating scientists designed their experiments as independently from one another as spacecraft engineering realities allowed. While agency headquarters selected experiments that complemented one another, neither headquarters nor flight center officials tried to coordinate the PIs' efforts to analyze their data and disseminate their findings.

By contrast, an international community-reforming project [AMPTE] and an international project that bordered on community-creating [IUE] became international primarily to coordinate expertise that was unevenly spread among nations. The former project used research institutes of different nations to build multiple spacecraft to serve complementary functions that the collaboration coordinated in operations. The latter used one nation to build the bulk of the science payload, but delegated responsibility for the most innovative component to another nation, because the other nation had pioneered in developing a technique essential to the component. These two projects were not immune to the other forces of internationalization. One seized on an upgrade in launch capabilities to broaden its scientific capabilities by bringing in a third nation to instrument what had originally been an inert part of the spacecraft. The other spread funding burdens across more contributors by offering responsibility for a spacecraft system that was easily isolated from the science payload to a third flight center. However, in both cases the instigating institutions fit the additional participants into a scheme the instigators had already set. The initial thrust to internationalize lay in the instigators' inability to assemble the needed expertise on a national basis for projects designed to stretch the social fabric of scientific sub-communities.

Mundane logistical difficulties of meeting and communicating accompanied the internationalization of all these projects, but more serious difficulties were concentrated in the projects that had to coordinate expertise. Most important were managerial stresses stemming from the different policies of different space agencies or from the independence of the flight centers. The project coordinating multiple spacecraft that performed different functions [AMPTE] needed to pool data streams to get the benefits of the coordination. American scientists could readily accommodate this as an incremental change, for they were accustomed to NASA requirements that they archive their data. But the European scientist-instigators faced a difficult selling job, for European scientists were accustomed to controlling the use of their data without regulation from ESA (which had no authority over data because it did not fund the experiments). The project that built closely linked components of the scientific payload in different flight centers [IUE] could not shift personnel across national borders in response to problems; the result was probably some extra expense, as people on one side were paid to wait for the other's work, and there were some prickly feelings about who delayed whom.

All of these projects overcame difficulties rooted in their internationalism, and most interviewees spoke warmly of their interactions with colleagues of different nationalities. However, in only one of our cases was there any prior history of collaboration among any combination of the project scientists, project managers, scientist-instigators, or PIs, and nobody spoke of ongoing interests in developing future joint projects with their prior collaborators. International projects, while necessary and desirable, apparently exhausted their participants.

Numerous interviewees noted a recent loss of effectiveness of international agreements as a tactic for securing a project's place in NASA's budget. They usually identified the annual political review of NASA's budget, in contrast to ESA's five-year budget cycle, as the cause of NASA's problems in keeping multi-year commitments to international projects.

The frustrations of Europeans with NASA were somewhat tempered by the realization that the NASA funding structure has favored Europeans over Americans in the competition to fly experiments. Because NASA has funded both experiments and spacecraft construction, NASA could reduce its fiscal burden for a project by accepting a European experiment, which would be funded by the scientists' national government(s). ESA, however, has only funded spacecraft construction; therefore, it has had no fiscal incentive to accept American experiments. A NASA administrator, who decided to drop a European experiment as the scientifically least indispensable part of a payload that was outstripping the spacecraft's weight and power resources, recalled the decision was difficult because it did not ease the project's budgetary pressures on NASA.

B. Internationalism in Experiments
[Table of Contents]
Five of the six projects we studied in space science had formal international collaborators on one or more experiment or user teams. While we lack data from which to measure the prevalence of internationalism over time, the qualitative impression of interviewees is that social and technical forces are encouraging the internationalization of experiment teams.

Internationalization on the experiment level, as on the project level, had obvious fiscal and political advantages. Interviewees often cited spreading the costs of an experiment across governments as a virtue in the eyes of the funding agencies. International experiment or user teams that divided responsibilities with rough equality could use political expediency to determine which member served as PI for a proposal. In trans-Atlantic projects, an American scientist would typically be PI on a proposal to NASA and a European on a proposal to ESA. For an ESA project, international experiment teams would put forward as PI a citizen of a nation that was expected to generate relatively few proposals, because ESA's Science Policy Committee was presumed to be sensitive to the distribution of PIs among nations. However, when one nation's institutions absorbed most hardware responsibilities, the designation of another nation's scientist as PI for the experiment could be interpreted as an act of political chicanery; participants in one project did view one proposal (which was rejected) as such an instance. The national funding of experiments for ESA projects also motivated Europeans to internationalize experiment teams in order to guard PIs from funding problems. In two instances, PIs felt their international collaborators saved their experiments by coming up with resources to trouble-shoot unexpected technical problems that the PIs' national governments and home institutions were not prepared to cover.

Unlike at the project level, where internationalism sometimes consisted of coordinating the operations of independently built spacecraft, at the experiment level, the fiscal and political benefits of internationalism were available only to teams that divided among nations the technical labor for design and construction of single instruments. It is testament to the power of internationalism's advantages at the experiment level that international experiment teams treated interface and integration problems as challenges to technical cleverness. Even the administrative burdens of strict adherence to export- and technology-control regulations were an acceptable price for the benefits of internationally dividing hardware responsibilities.

Scientists who only analyzed data, and who were not citizens of the PI's nation, did not bring the political and fiscal benefits of internationalism. When contributors to an experiment's instrumentation wanted additional co-investigators to do data analyses for topics that were outside their interest or expertise, they usually found domestic collaborators. In one of the few instances where a European scientist was added to analyze the data of an American experiment team, his colleagues appreciated his aggressive pursuit of topics they were not covering, but also found he brought a penchant for redoing work in order to have his own nation's version of the data. Only when teams faced stiff competition for a slot on a spacecraft did they recruit foreign scientists with high prestige or rare expertise to boost the credibility of their proposals. Otherwise, experiment teams preferred to avoid the risk of nationalist tensions within team dynamics.

[Table of Contents]
Space-based research has been a risky foundation for a scientific career because of a combination of two factors. First, space-based data acquisition has been beset with problems beyond scientists' control—launch failure, spacecraft problems, or unlucky encounters between instruments and meteors or cosmic rays. Second, space scientists have not comprised a discipline unto themselves; American scientists who depend on space-based instrumentation to provide data join such societies as the American Physical Society, the American Astronomical Society, or the American Geophysical Union but have never formed something comparable around space science. Consequently, they must compete in these disciplines with ground-based observers and laboratory experimenters with more secure sources of data. Space scientists' careers have been based on the presumption that the quantity and quality of scientific papers that can be written on the basis of a successful space-based experiment justify the risk of total failure.

Intensifying the high-risk character of space science careers is the impression of virtually all interviewees that the length of time needed to prepare space projects for launch has been increasing and the number of flight opportunities has been decreasing. Although longer project durations are assumed to be accompanied by larger project budgets and science payloads, interviewees believe that the extra money and instrument slots not only fail to compensate for the smaller number of flights but also add unwelcome administrative burdens and political interest in the disbursement of research funds. Some scientists advocate and practice a reconsideration of their institutions' roles in the creation of future space scientists.

The greater difficulty of pursuing space science is predictably lamented by instrument designers, especially those based at universities. They believe that graduate students find participation in space-based hardware projects to be too long and risky a route to an advanced degree in comparison to other possibilities. Of the graduate students we interviewed, only one had tried to pursue a hardware project, which personal circumstances forced him to drop, and he, like the other students we interviewed, launched his career on his ability to extract important measurements from data generated by instruments he had not helped to build. Some instrument designers also felt a lack of academic recognition for their activities. One in particular wonders whether his academic career was slowed because, as a postdoctoral scientist working for a PI, he was consumed by hardware problems; had he worked instead for one of the PI's co-investigators, he could have better kept up with the literature, assumed some academic responsibilities, and still had access to the data (though with less technical sophistication in analyzing them). Another scientist found that his work on project hardware qualified him mainly for industrial positions he did not want or for another post in a generically similar project. He characterizes scientists as playing "chicken" with each other—keeping the rewards for instrumentation development low and assuming "that somebody's going to be foolish enough to spend some time building something for the sake of the community."

By contrast, scientists who have participated in a project but not had direct responsibility for instrumentation have happily prospered in academic niches, provided they have been able to learn enough about an instrument to use it with imaginative sophistication. They envision a future in which scientists at many universities can work with students on data from instruments built at a few institutions. Instead of complaining how graduate students are failing to acquire sophistication in instrumentation, they extol the ability of graduate students to use computers to combine and manipulate ever larger data sets for new ends. Instead of having to support a research and development group or to negotiate their way onto proposals as co-investigators, they value opportunities that can be seized without being socially well connected. (This last point was especially important to a woman we interviewed who faced sex discrimination.) In their eyes, specialization between instrument-builders and data-analyzers encourages a greater flourishing of the several talents needed to advance science. The increasing influence of such scientists is exemplified in NASA's designation of "interdisciplinary scientists" for more recent projects than those studied here. Interdisciplinary scientists serve on the science working group (SWG) and can examine data from multiple instruments, but they do not provide instrumentation.

The qualitative impression from the interviews is that common values and interests between the two groups will prevent internecine conflict. Most scientists we interviewed assume that an excellent instrument makes possible a multitude of measurements. Most see the scientific legacies of successful missions, even the ones that had a focused goal, to be a catalogue of new or improved measurements rather than a few compelling findings. And most assume that the instrument builders concentrated on skimming the more straightforward results from their individual instruments in order to build a record to justify their next proposals. Most PIs pragmatically conclude that their teams can only do a fraction of the science that their instruments make possible and not only a sense of duty to the community but also their self-interest obliges them to welcome extra-mural users of their data. A few of our interviewees, including some instrument builders, are actively interested in reprocessing old data or pooling older data streams in the belief that interesting future results will come from handling data with greater sophistication. Such ferment is more prevalent than the complaints of one older instrument designer, who refers to interdisciplinary scientists as "undisciplined scientists" with too many rights and too few responsibilities.

Space scientists believe the cause of their changing circumstances lies somewhere in NASA Headquarters or the political environment in which NASA has been operating. The advantage in the politics of funding enjoyed by large missions that can accommodate many scientific interests has set off a positive-feedback cycle in the generation of new projects. The complexity and expense of the spacecraft for the large projects oblige the space flight centers to contract out their construction and integration. Project managers who enjoy having direct control over spacecraft design and construction have thus been leaving the flight centers for the contractors. Those left at the flight centers are more accustomed to overseeing contractors and less enamored of in-house construction of smaller spacecraft more tailored to more specialized uses. This change in flight center character then feeds back into a preference for larger, contractor-built projects.

[Table of Contents]
The space science projects we studied always structured formal communication in a hub-and-spoke fashion. However, within any single project, the office at the hub and the importance of the hub in comparison to the spokes shifted with stages of the project. Across projects, the same office has not always been at the hub for the same project stages. Consequently, it is difficult to draw trustworthy and meaningful generalizations.

Several institutional settings have served as communication hubs during the formation of space science projects. Most prominent have been the NASA and ESA space flight centers, whose engineers have a self-interested stake in helping dispersed scientists centralize discussions of their spacecraft needs. Moreover, the NASA flight centers have research scientists who look for novel spacecraft configurations that scientists across dispersed institutions would find appealing. However, other institutions in both the United States [Johns Hopkins Applied Physics Laboratory, American Science & Engineering] and Europe [Rutherford Appleton Laboratory, Max Planck Institute for Extra-terrestrial Physics] have also successfully functioned as hubs in the formation of space science projects on the strength of an assemblage of appropriate scientists and engineers. NASA Headquarters officials have told us that for projects more recent than those we studied, advisory committees of scientists to officials at NASA and ESA headquarters or to national academies have served more as hubs for forming projects and less as reviewers of others' plans.

Whoever conceptualized a space science project had to elicit encouraging responses from "spokes" in the scientific community. Scientists at NASA flight centers had straightforward ways for floating ideas. By trying to organize a meeting for outside scientists to discuss a project's possibilities, or by requesting outside scientists to submit proposals for how they would use a project's data, they could demonstrate (or discover a lack of) widespread support among scientists for a project. Engineers at flight centers turned to formal advisory bodies of scientists to provide a demonstration (or denial) of scientists' support; in such cases the advisory bodies themselves became hubs into which information about the project flowed. Scientists outside the flight centers, by contrast, had to use informal communication to convince prestigious peers to contribute to unsolicited proposals to form projects.

Agency headquarters have been the initial target of those involved in planning projects. A common, but not always successful, tactic has been for planners to convince a headquarters "discipline scientist" or "division chief" to become a partisan of the project. Planners then feed that individual the information needed to push the project through the politics of obtaining funds. That has been easier for scientists at flight centers than for outsiders, who have sometimes lobbied unsympathetic or ineffective headquarters scientists. The outsiders who succeeded usually impressed themselves on headquarters by enlisting the support of scientists or engineers at flight centers or by obtaining endorsements from prestigious advisory bodies. Once a project obtained effective advocates at headquarters, those advocates became the communication hubs, linking the scientists and engineers who produced technical plans to the broader politics of the agency's budget.

When headquarters secured funding for a project, it imposed a formal organizational structure that specified communications channels. A project manager—an engineer at a space flight center—had authority over the project's budget and schedule. While a spacecraft was under construction, the project manager's office was usually the chief communications hub for technical information. Principal investigators (PIs) individually negotiated with the project manager the interfaces between their instruments and the rest of the spacecraft. The PIs also met as a Science Working Group (SWG) to consider collective concerns about the spacecraft's engineering or their strategy for taking data. One scientist, who was usually employed by the project manager's flight center, was designated "project scientist" and served as the communications hub between the scientists as a group and the project manager[15]. The project scientist also reported to a program scientist at agency headquarters, who could raise issues with the program manager—the project manager's superior—or their mutual superiors. Thus the scientists on a project had a communication chain to reach headquarters if they feared the project manager's engineering judgements were undermining a project's scientific viability.

This sketch of intra-project communication describes projects that flight center scientists or engineers have advocated far better than it describes projects that scientists outside the flight centers planned. In the latter cases, the outside scientists have wanted to vest managerial authority in their own institution's engineers and to reduce or even evade the scrutiny of the official project manager. In the course of planning and advocating the project, the leading outside scientists secured the moral authority to speak for other participating scientists well before any project scientist was appointed. The official project scientist has consequently kept a far lower profile. Only the roles of program manager and program scientist remained the same; they responded to conflict within the project but otherwise interjected themselves only when projects were large enough to affect or be affected by the agency's budget.

Once spacecraft were constructed and launched, project managers ceased to be important communications hubs, but project scientists continued to lead SWG meetings. While these meetings were obvious occasions for scientists to compare data and preliminary findings, project scientists and leading outside scientists almost never brokered or pressured experiment teams to exchange data or work jointly on scientific topics of common interest. All such communication occurred (or failed to occur) among the relevant PIs. No project we studied set up any system for project-wide review or approval of scientific publications; when scientists participating in a project arrived at conflicting measurements or disagreed on the interpretation of their findings, they discussed their differences outside project auspices.

(By Joan Warnow-Blewett and Anthony J. Capitos)

[Table of Contents]
This report is based on a number of sources: (1) the archival assessment of 90 interviews on the six selected cases for space science; (2) the patterns uncovered through the historical-sociological analysis of these interviews; (3) information from 34 questionnaires returned by interviewees concerning their record-keeping practices; (4) discussions with archivists at home institutions of interviewees; (5) five questionnaires from archivists/records managers at these home institutions; (6) eight site visits to the National Aeronautics and Space Administration (NASA) Headquarters and three of its flight centers; (7) site visits to the National Archives and Records Administration appraisal archivists; and (8) the AIP Center's general knowledge of archival institutions in various settings.

Prior to this project, the AIP Center had few contacts with NASA and its flight centers. Site visits to scientists, administrators, and records officers at NASA and the European Space Agency (ESA) headquarters and flight centers were critical. Also key were the historical-sociological analyses of project interviews, rich in data on the institutional structures and functions that had the greatest impact on the initiation, funding, planning, management, and operation of projects. The findings from the project's analyses and site visits, filtered through our previous knowledge of archival institutions, provide the single most reliable guide to identifying areas of documentation problems and opportunities.

A. General Observations
[Table of Contents]
In the field of large space science collaborations in the United States, NASA is virtually the only player. Not only does NASA provide the funding for space science experiments, it also provides the institutional structure for the project through its flight centers. Space science projects have formal record-keeping requirements related to this bureaucratic structure. Also, since participating scientists create individual instruments which have to be integrated into a single spacecraft, considerable formally documented interaction between flight centers and the experiment teams is required. The situation is very similar for ESA and its flight center. For these reasons, substantial documentation is virtually always created by space science projects. The creation of records does not, of course, equate with saving those records.

Outside of NASA, creating and saving records is largely based on the personal inclinations of participants.

B. Data on Categories of Records
[Table of Contents]
We begin with data our study gathered on the six selected cases in space science.

  1. Collaboration-Wide Mailings
  2. Electronic Mail
  3. Scientific Electronic Data

C. Circumstances Affecting Records Creation
[Table of Contents]
The bureaucratic structure imposed by NASA—especially at the flight centers—means that certain offices are held responsible for specific aspects of NASA projects and are expected to create specific categories of records. Because of this, records are created almost regardless of the circumstances of the particular instrument (such as number of member institutions and geographical distribution). At the NASA Headquarters level, however, more documentation is generated for joint projects with space agencies abroad, and for large budget missions requiring annual congressional approval.

D. Location of Records
[Table of Contents]
Our investigations located a small number of categories of records (fewer than twenty) that, taken as a whole, provide adequate documentation for all multi-institutional collaborative research. For any one project these core records are located at several settings. The main locations of records in the United States are at the National Academy of Sciences in its Space Studies Board records (previously the Space Science Board), in the hands of discipline scientists, program scientists, and program managers at NASA Headquarters, project scientists and project managers at NASA flight centers, and principal investigators (PIs) of project experiments (instruments). In European Space Agency projects, project managers and project scientists at ESTEC (the European Space Research and Technology Centre) and PIs at universities and research institutes generate records similar to those of their United States counterparts. However, working groups of ESA's Science Program Committee, not the national academies of the several nations, generate the records most similar to those of the United States National Academy of Sciences and the NASA discipline scientists. Additionally, funding agencies of the several nations involved in each mission independently pass judgement on proposals to build experiments for ESA projects.

[Table of Contents]
Institutional archival policies and practices are key to the preservation of documentation. In our six case studies, 64% of the participants interviewed came from government agencies (NASA, ESA, or their flight centers) or government-funded laboratories (e.g., the Rutherford Appleton Laboratory in England). Only 35% came from academia, with the remaining 1% from industry. We address archival practices at the two main settings for members of collaborations in space science during our period of study: NASA and ESA with their flight centers, and academia.

[Table of Contents]
1. National Archives and Records Administration Records Schedules

The National Archives and Records Administration has determined that its General Records Schedules should not govern the disposition of research and development records. Previously, the GRS had guidelines concerning the disposition of these records in its Schedule 19, but it was decided that many of the records identified by this schedule were not uniformly created by the Federal agencies. The National Archives now requires each agency to schedule its research and development records to best fit the agency.

The process for developing a new records schedule manual is long and consists of many steps. It typically begins when the National Archives writes a report expressing the need for revisions in the old records retention schedules. Next, the agency writes the revisions and returns them to the National Archives for review. Each National Archives division has a chance to comment on the new manual prior to a wider review by some 50 individuals involved in the use of the new schedules—individuals at the National Archives headquarters, at its Federal Archives and Records Centers, and records managers at the agency. Depending on other initiatives, particular sections of the records schedules may be held back. For example, at the time of the AIP Study, the schedules for electronic records of scientific data were held back pending a study on the subject by the National Research Council. After reviews and revisions, the schedules are published in the Federal Register for a forty-five day response period. Once this period is over, the records schedule is sent to the Archivist of the United States. If the Archivist signs off on the new schedule, the manual is authorized to be used by the agency.

2. NASA Records Schedules
The National Aeronautics and Space Administration (NASA) has a new records schedule currently under review at the National Archives. Written by the records manager at NASA, this new manual replaces the previous 27 schedules (which were arranged by subject) with a manual of 10 schedules arranged by function. According to the NASA records manager, these 10 schedules better reflect the workings of the agency. At this time only the few schedules concerning administrative records have passed the National Archives' review. The schedules concerning research and development records are still being evaluated not only by the National Archives, but by the Federal Records Centers involved and by others who will be affected by the changes in the schedules, chiefly the flight centers. The old records retention manual created many appraisal problems, requiring—according to some—the reappraisal of many of the records scheduled under it. This new manual has been under development for approximately four years.

The new NASA records schedules are written in a very general manner in order for the manual to be applicable to both NASA Headquarters and the flight centers. Only records of the upper level management offices at Headquarters are specifically discussed; the mid-level Headquarters scientists are fitted into other functional categories. For example, the term "program" and the term "project" are interchangeable in this new schedule, even though—in NASA parlance—program scientists and program managers are Headquarters positions and project scientists and project managers are at flight centers.

3. NASA Record-Keeping Practices
It is not enough to review the records schedules from Federal agencies; a review of the records management program which will implement the schedules is equally important. When agency records officers describe their programs and the proper use of the records retention schedules, their accounts may differ from the actual practice of agency employees. Our discussions (primarily site visits) included NASA records management staff at both NASA Headquarters and the three flight centers involved with our selected case studies, discipline scientists at Headquarters, branch scientists at the flight centers, and the National Archives appraisal archivist, along with general interviews with NASA program scientists.

NASA's records management program reflects the relationship that NASA Headquarters has with its flight centers. Each of the flight centers has its own specialized interests and takes a somewhat independent stance from Headquarters. Records managers at flight centers feel entitled to interpret the records retention schedules to best fit their particular center. Although the new NASA records retention schedule is an improvement over the previous one, its reception has not been overwhelmingly positive. While one flight center's records manager accepted the future adoption of the manual, another stated that it was not appropriate for his flight center. Part of the problem could stem from the fact that the new records retention schedules are written in such general terms that the flight centers have difficulty in understanding how they are to be applied.

A second problem which arises from this schedule is the assumption that NASA scientists are actually filing their records according to standard procedures. The NASA records management program assumes that central filing procedures are being used along with a uniform filing index. Our interviews with several discipline scientists and project scientists showed that records were not being handled according to these procedures. As with scientists in most federal agencies, fear of the destruction of records combined with lack of knowledge about records management procedures has made NASA scientists keep their records in their own offices. According to one discipline scientist, concern about the retention of his records prompted him to send them to the NASA History Office; no one from the History Office has ever contacted him about these records. Another NASA scientist stated that he has kept some records from the projects he was involved with but doesn't know what will happen to them, since no one during the many years he has been with NASA has ever been in touch with him about records management.

NASA records management has never taken a proactive approach to the selection and retention of records. According to one flight center records manager, the only records they accession are ones that people send to them voluntarily. At the start of a large project, the flight center records manager stated, they will ask the project administrators to remember records management at the end of their project. Unfortunately this seems to be the norm for NASA flight centers, due to the lack of staff to perform more proactive duties. Records management at NASA flight centers has mostly been a neglected area; some have a few full-time employees, some contract employees, and others only one half-time employee. During our site visits to flight centers, a search by records management staff turned up no records for two of our case studies. We also found that records of some of the founders of a flight center had been removed to a private institution.

Only one of the NASA flight centers has a fully established archives and records management program. The Jet Propulsion Laboratory (JPL) in 1989 established an archives and records management program with six full-time employees, including an archivist. Along with this program, JPL has included an oral history component to supplement the historical materials kept at their archives. This program could be the model for the rest of NASA's flight centers, and some at the National Archives are pleased with what JPL has accomplished.

On the other hand, NASA Headquarters is not entirely pleased with the situation at JPL. JPL, unlike other flight centers, operates under a NASA contract (with the California Institute of Technology). Due to this arrangement, JPL does not feel it creates Federal records other than the required deliverables and does not have to follow the NASA records management program. JPL's primary reason for disliking NASA's program is that the language of the records schedules, along with the types of records covered, does not fit well with the language and records of JPL. In addition, records in the JPL archives are only available to public researchers after they have been reviewed and cleared by the original creating office. Although JPL has established an archival program, access to these records—essential for historians—has been problematic. Under the new contract NASA and JPL signed recently, if JPL wishes to use Federal Records Centers, it must employ the newly revised NASA uniform filing index. However, according to JPL, the contract still seems unclear as to whether JPL is creating Federal records or not. Currently, JPL is developing its own records retention schedule, which will not be based on the new NASA schedule.

Finally, our data show that most NASA project managers don't retain many project records after the spacecraft has been launched. These records are transferred to the operations group, who have the opportunity to take what they need in order to keep the spacecraft in working order; the rest of the documentation is sent to records storage. Project scientists, on the other hand, often stay with the project through its lifetime. According to our questionnaires and interviews, when project managers leave or retire from NASA, they tend to leave any remaining records in their offices for someone else to deal with.

B. European Space Agency and ESTEC
[Table of Contents]
The European Space Agency has begun to deposit its archival records at the European University Institute in Florence, Italy. The Agency is also preparing an inventory and record schedule for its administrative records. One of the concerns of the European Space Agency is the cost involved in setting up an archival program; they don't expect to be able to afford a professional archivist to help select and prepare the documents for transfer to Florence. Most of the records sent to Florence thus far are the official documents for the delegate bodies from 1964 to date, consisting of policy documents, technical establishment documents and general technical documents. According to ESA, ninety percent of these records have been recovered and are presently being sent to the archives.

The records which ESA produces comprise three levels of documentation. The first is the "blue" documents or papers. These are the official published documents as well as the minutes of the Science Program Committee. Also included are the documents of the Space Science Advisory Committee, consisting of minutes and related documents of its meetings and working groups. The next level is the "yellow" books, consisting of assessment reports on preliminary plans for proposed projects. Third are the "red" reports, which are the phase A reports for specific projects. Along with these records ESA has industry studies and reports, contracts, correspondence, etc.

ESTEC is the lone European Space Agency flight center, situated in the Netherlands. The types of unpublished records ESTEC produces include: monthly reports, division meeting minutes, correspondence, and some e-mail, along with the technical project records. ESTEC sends selected records to the European University Institute in Florence. The records ESTEC places highest priority on are the technical project records. During a site visit there we urged that the records staff consider the importance to future administrators and scholars of correspondence and other, less technical files.

C. Academic Archives
[Table of Contents]
About one-third of our interview subjects were employed in academia, some as PIs and others as members of an instrument team. On academic sites there is a long-standing tradition of documenting the full careers of outstanding faculty (this is particularly true in English-speaking countries). Professional papers of space scientists with distinguished careers would qualify for acceptance by most academic repositories following well-established procedures. We cannot afford to be too optimistic, however; current academic archives programs are suffering from reduced resources.

D. Subcontractors
[Table of Contents]
In general, we found that some aspects of the relationship between scientists and industry are relatively well documented. The two most common aspects are the purchase of "off-the-shelf" items and the fulfillment of a formal contract. The former is documented in product literature (such as company catalogs), the latter—to some extent, at least—in the legal contract. Because the selection of a contractor is in many cases a very formalized process (notably in ESA projects), many documents are generated by companies bidding on contracts and by project evaluators.

Other aspects are poorly documented. It is probably the case that most of the innovative engineering that takes place in industry never results in a publication, not even in the form of a thorough internal technical report or memorandum. And much of the give-and-take between collaboration and company personnel occurs in person, by e-mail, or in some other way not producing documents that are usually retained. Another problem concerning this documentation arises when subcontracts go to unstable companies that may go out of business before long; there is little chance that their records will be preserved. One of our cases provided a good example of this problem. It arose in the development of the fast electron experiment flown on the ISEE missions. The electronics of the instrument were contracted out to Matrix Research and Development, a company which has since gone out of business.

(By Joan Warnow-Blewett and Anthony J. Capitos)
For the AIP project's recommendations on general steps that institutions should take to improve their documentation of large collaborations, see Report No. 1, Part B: "Project Recommendations."

[Table of Contents]
The purpose of these guidelines is to assist archivists and others with responsibilities for selecting records for long-term preservation. The guidelines are based on two years of field work by the project staff of the American Institute of Physics (AIP) Study of Multi-Institutional Collaborations in Space Science and Geophysics; they were reviewed by the study's Working Group, composed of eminent scientists and science administrators in these disciplines as well as archivists and historians and sociologists of science.

The scope of these guidelines is records created by multi-institutional groups that set national and international policy for space science or that participate in collaborative research projects. A multi-institutional collaboration consists of groups of scientists from a number of institutions and the flight center where the project is managed. The collaboration joins together for a period of years to design and construct apparatus and software, collect and analyze data, and publish results. Close to 100 interviews with members of six collaborations were conducted using structured question sets to cover all phases of the collaborative process and the records created by the activities; science administrators were also interviewed.

These appraisal guidelines identify what kinds of evidence are needed to provide adequate documentation of all large, collaborative projects in space science. Because the trend in space science collaborations during the period of the AIP Study (from the early 1970s to the near present) has been toward larger, longer, and fewer projects, we have felt no need to offer guidelines for identifying significant space science projects, but instead recommend that documentation be saved for all large space science collaborations. Outside the scope of these guidelines are the records created by other activities at the government laboratories, universities, and other institutions involved, and by other activities of individual scientists. We recommend different appraisal guidelines for these materials[16].

Guidelines for the selection of records should emphasize the kinds of information, or "evidence" required to document a collaboration's activities. We have endeavored to take into account future needs of scientists and administrators in science policy and management, as well as historians and sociologists of science. Appraisal guidelines are not fixed rules; they are informed recommendations that require interpretation by those who select records. Since functional analysis is a useful tool for selecting records, we begin our guidelines with summary statements about the key activities of space science collaborations and include references to the categories of records created in the process.

Appraisal guidelines require unending revision. As the process of collaborative research changes (and we have seen such changes during the 1970s and '80s), the kinds of evidence needed will be altered. Equally important, the formats of the evidence will change. The records described in these guidelines are virtually all paper files, but the media are shifting towards electronic records. Many records (such as correspondence, logbooks, and a variety of other files) are widely created in electronic formats; archivists are already experimenting with ways to retain these records in electronic format for future use by historians and others[17]. Further, technology may soon permit computer output microfiche (COM) to be transferred back as needed to contemporary electronic formats. We need to watch the new technologies and try new solutions for securing adequate documentation[18].

Over the next few years, our guidelines for space science and geophysics (as well as those from our earlier study of high-energy physics) will be sharpened by the practical experience of reviewing files and taking the steps to preserve them. The appraisal guidelines will be extended by the AIP Study to cover other areas of physics and allied sciences. At the end of the AIP Study of Multi-Institutional Collaborations, other disciplines of science and technology will be able to use these guidelines as a basis, extending them into their own fields.

There is ample evidence that multi-institutional collaborations are becoming more and more important. Archivists should increasingly view such collaborations as a major part of the way science is done, recognize their scientific staff as participants, and consider that the documentation of this participation should be a serious candidate for preservation as key records of our civilization.

Eventually, appraisal of records documenting collaborative projects in other areas, such as industry and banking, could make adaptation of these guidelines germane to a wide range of important combinations of institutions in modern society. Thus, archivists outside the arenas of science and technology should consider these recommendations.

[Table of Contents]
Records are created in the process of carrying out activities or functions. The most effective approach to appraisal of records is "functional analysis," in which important functions are identified and then the best documentation of these functions is located and preserved.

The key functions of all scientific activities can be summarized as establishing research priorities; administering research, including the development of instrumentation; conducting the research and development itself; and disseminating the results. What follows is a brief analysis of these functions along with the categories of records created through these activities and references to the sections of these guidelines where further information can be found[19].

A. Establishing National/Multi-National/Discipline Research Priorities
[Table of Contents]
1. Hypothesizing and Defining Priorities

In space science the plans for a project or mission are well-defined prior to the selection of the scientific staff. Until recently, ideas for specific projects or missions were typically generated by scientists and engineers at NASA's flight centers. Currently, at NASA Headquarters there are levels of working groups, from the discipline scientists' management operations working group up to the NASA Advisory Council, which help structure and define the scope of projects to be promoted. These committees are composed of members of the space science community, including space scientists located at NASA's flight centers. It is through this structure that NASA's discipline scientists are able to campaign for support for particular projects. As ideas for new projects are passed up through this advisory structure, they are better defined and ideas for particular instruments to be included are identified.

The Space Studies Board of the National Academy of Sciences (NAS) is charged with the responsibility of suggesting broad areas of research in which NASA should focus its efforts. This Board, with its sub-committees, is composed of distinguished members of the space science community. It has been both a brake on flight center plans that seem too grandiose and an alternative route through which scientists from outside the flight centers can campaign for projects.

This advisory structure does not seem to exist for the European Space Agency as formally as it does for NASA in the United States, although the European Science Foundation's European Space Science Committee hopes to fill the role of recommending broad areas of research for ESA in the manner of the NAS Space Studies Board. Advice and reviews on specific projects are provided by the Science Program Committee's working groups.

Documentation: National Academy of Sciences' Space Studies Board; European Science Foundation's European Space Science Committee; NASA Headquarters Working Groups.

NASA Headquarters may ask a flight center to do a pre-phase A study under the direction of a lead engineer to judge the feasibility of the proposed project. If the outcome is positive and funding is secured, NASA asks the center to do a full-fledged Phase A study for the instrument payload and asks a center or subcontractor to do a Phase B study for the spacecraft.

Documentation: NASA Headquarters discipline scientists and program managers, NASA flight center project managers, and lead engineers who oversaw Phase A studies (not all of whom became project managers).

Once a project is defined and funding obtained, the NASA Headquarters discipline scientist usually becomes program scientist and a Headquarters engineer is appointed program manager. Together with the project manager at the flight center, they prepare an Announcement of Opportunity (AO) to be issued by the Associate Administrator for Space Science. This AO defines what types of instruments are requested for the NASA mission and what technical constraints these instruments must operate within. Individual teams propose their instrument ideas to NASA Headquarters. Following a standard peer review of the scientific merits of the proposals and an engineering assessment of their feasibility, the Associate Administrator for Space Science makes the final decision concerning which instruments will be included on a project's payload.

Documentation: NASA Headquarters discipline scientists and program managers, the NASA Associate Administrator for Space Science, and flight center project managers.

For projects of the European Space Agency, ideas are proposed through the agency's astronomy or solar system working groups which, in turn, report to ESA's Space Science Advisory Committee. The recommendations of this advisory committee are sent to ESA's Science Program Committee, which makes the final decision concerning which projects ESA should pursue. The Science Program Committee issues AOs for instruments for successful projects.

Documentation: ESA working groups, ESA Space Science Advisory Committee, and ESA Science Program Committee.

2. Funding
NASA space science projects that are sufficiently large or are part of large programs are funded through dedicated line-items of the NASA budget and face annual reviews from the Congress and the Office of Management and Budget. Smaller space science projects fall into the Explorer program, an annually approved budget line item that does not specify a particular project, and leaves the decision as to which project to fund with NASA's Associate Administrator for Space Science. Since NASA is both the funding agency and the planning agency, these processes occur concurrently. The most detailed budget records concerning the funding decision-making process would be located at NASA Headquarters with the program manager responsible for the individual project.

Documentation: NASA Headquarters program managers, NASA Associate Administrator for Space Science. Additional documentation, not dealt with by our study, will be found in the records of university administrators, records of the Office of Management and Budget, and records of the U.S. Congress.

The European Space Agency's budget is approved in five-year intervals, freeing projects from concerns about yearly appropriations. This funding covers the construction of the spacecraft itself. The individual instruments for an ESA project are funded not by ESA but by the appropriate government agency of the country whose scientist is chosen to provide the instrument.

Documentation: Funding agencies of participating countries and professional papers of principal investigators (PIs); informational copies are commonly sent to the project managers at ESA's flight center, the European Space Research and Technology Centre (ESTEC).

If a European experiment is included on a NASA project, the European country pays for the construction of the instrument, much as for an ESA project. This process also works in reverse for American investigators included on European projects. No money is exchanged between countries during these international projects; therefore, the records concerning the funding of an international project will be in multiple locations.

Documentation: (For NASA instruments) NASA Headquarters discipline scientists, (for European instruments) funding agencies of participating countries, professional papers of PIs. If the instruments are part of a NASA mission, there would be funding records with the Headquarters program manager responsible for the project; for ESA missions, these documents would be with the project manager at ESTEC.

B. Administration of Research and Development
[Table of Contents]
1. Establishing Project/Mission Research Priorities

The Science Working Group (SWG), which is composed of the PIs involved with a project and chaired by the project scientist, establishes the details of the scientific strategy for a particular project. These groups have been especially important in establishing the scientific priorities for planetary encounters and for astronomical satellites with relatively low orbits and thus short operating potential.

Documentation: NASA flight center project scientist.

The engineering of the entire spacecraft falls under the responsibility of the project manager. The project manager has the final word on all budgetary and technical problems once the project is under development, except for appeals to the program scientist.

Documentation: NASA flight center project manager.

2. Staffing
Experiment (instrument) teams are formed to respond to individual AOs from NASA Headquarters; at their core are stable groups of scientists developing instruments with grant support. The co-investigators are selected by the PI for the additional design skills, scientific background, or prestige they can bring to the experiment team.

Documentation: Professional records of PIs.

3. Refining Scope of Responsibility of Project Management, Science Working Group, and Experiment Teams
Although NASA prefers a particular method for starting space science projects with predefined roles and responsibilities, not all projects are started through traditional channels, and international projects require flexibility from the participating agencies. For these reasons, the scope of responsibility for the project manager, the SWG, and the experiment teams has to be refined as each project is initiated. In particular situations, the project manager may have little involvement in the overall project design or the SWG may have little voice in changes in engineering options.

Documentation: NASA flight center project manager and project scientist.

C. Research and Development
[Table of Contents]
1. Designing and Constructing Instruments

The function of designing and constructing instruments takes place at the experiment (instrument) team level. It is the PI and members of their team who are responsible for carrying out this function. Project management is less concerned with the design of the instrument than with its electronic requirements and its integration with the spacecraft. The PIs have the option of building the instrument at their respective facilities or contracting out the construction of particular instruments or portions of instruments. It is important to note that the development of most instruments used on NASA projects was funded earlier by NASA basic research grants administered by discipline scientists.

Documentation: PI professional records, NASA Headquarters discipline scientists.

The integration of the instruments into the spacecraft falls under the responsibility of the project manager. Most projects have an interface control document, which defines the way the experiments will be interfaced with the spacecraft. In some NASA projects the project managers have appointed instrument managers to deal with the PIs on instrument-spacecraft interfaces. In European projects, a project manager may appoint a "payload specialist" to serve a similar function.

Documentation: NASA flight center project managers and instrument managers.

2. Gathering and Analyzing Data
Until recently, the function of gathering and analyzing data has been the domain of the individual experiment teams. Instruments were built and data were processed with no one being able to make use of the data without the cooperation of original investigators; data streams were combined at the discretion of the PIs. Pressure from other scientists wanting to access data or use instruments has encouraged a trend towards more user-oriented projects, which impose standardized data processing that enables outside users as well as the original investigators to extract reliable results.

Documentation: Professional records of the PIs.

D. Communicating and Disseminating Results
[Table of Contents]
In space science, the communication and dissemination of results is controlled not by the entire project but by individual PIs for their own teams. While in some teams topics are assigned, in most cases the PI is simply aware of the work being done by team members, and there are no formal rules about who within a team takes the lead in drafting papers on a given topic. In space science, part of the "rules of the road" is that each PI is responsible for negotiating wider use of the team's data and arrangements for listing authors on the publications that result. These issues may be discussed by the project's SWG.

Documentation: Professional records of PIs, NASA flight center project scientists.

At the completion of the project, NASA requires the PIs to process their data for placement in the National Space Science Data Center for use by others. In recent years the quality of the metadata that make the data accessible to others has greatly improved, so that—for most users and most applications—it is no longer necessary for outsiders to contact the PI in order to fully understand the creation of the data and any peculiarities they might have. Exceptions are in cases where unusual or subtle details are needed from the data, such as in high-energy astrophysics, where it is important that users contact the PIs who understand the instruments and, therefore, the data.

Documentation: Records of the National Space Science Data Center, records of experiment PIs.

A. Records of the National Academy of Sciences' Space Studies Board [20]
[Table of Contents]
The Space Studies Board of the National Academy of Sciences is charged with advising NASA on which areas of science it should focus its efforts upon. Through disciplinary subcommittees, like the astronomy and solar physics committees, scientific communities are able to identify the areas of research best suited to their goals. For larger NASA projects, the Board forms special ad hoc committees to better define the science of the mission. Although NASA is not statutorily required to heed the Space Studies Board, Congressional attention to the Board's findings makes its "blessings" essential to forming NASA projects.

The National Academy of Sciences should continue to save, as part of the Academy Archives, Space Studies Board minutes, reports, and correspondence; subcommittee minutes and reports; and working papers of ad hoc committees.

B. Records of the European Science Foundation's European Space Science Committee
[Table of Contents]
This committee hopes to play for the European Space Agency a role similar to that played by the NAS Space Studies Board for NASA.

The minutes, reports, correspondence, and working papers of the European Science Foundation's European Space Science Committee should be preserved as part of the ESA archives.

C. Minutes and Other Records of Working Groups of NASA Headquarters
[Table of Contents]
At the NASA Office of Space Science there are several important working groups. First are the Management Operations Working Groups (MOWGs), created for each discipline scientist; at this level scientists of the relevant disciplines seek to focus the needs of the profession into actual projects. Similar working groups exist up through the NASA hierarchy to its Advisory Council, which guides the NASA Administrator in making the final selection of projects and missions. It is through this structure that representatives of the space science community help NASA define and promote specific projects, including their payload of instruments.

The records (correspondence, minutes, recommendations, etc.) of NASA discipline scientists and their MOWGs should be scheduled for permanent retention by NASA Headquarters and eventually transferred to the National Archives. This recommendation also holds for NASA administrators and their working groups further up in the NASA hierarchy. These working groups and the administrators they report to are: subcommittees of the Space Science Advisory Committee for each NASA division, Space Science Advisory Committee for the Associate Administrator for Space Science, and the Advisory Council for the NASA Administrator. This documentation should also include the Office of Space Science's strategic planning records.

D. European Space Agency's Working Groups and Committees
[Table of Contents]
In a similar fashion to NASA, ESA has levels of working groups and committees. ESA's astronomy or solar system working groups reflect the interests of their scientific communities in proposing projects. Their reports and proposals are passed upward to ESA's Space Science Advisory Committee, whose recommendations are submitted to ESA's Science Program Committee, which makes the final decision concerning which projects should be pursued.

The records (including minutes, reports, and correspondence) of these ESA working groups and committees should be saved permanently in the ESA Archives in Florence.

A. Grant Proposal Files of Discipline Scientists, NASA Headquarters
[Table of Contents]
Through its discipline scientists, NASA funds and administers research grants to develop instruments for possible future use as part of payloads for space projects.

These files provide the most effective, efficient documentation of instrument development for NASA space projects; they would include all accepted proposals, a random sampling of rejected ones, progress and final narrative and fiscal reports, and correspondence. They should be scheduled for permanent retention and eventual transfer to the National Archives.

B. Records of NASA Headquarters' Discipline Scientists as Study Scientists and Program Scientists
[Table of Contents]
As programs are developed, a discipline scientist is asked to become the "study scientist" for a project and later, when funding is secured, the program scientist. The program scientist is the audience for the Phase A work of a NASA flight center investigating the feasibility of a proposed NASA project. In addition, the program scientist develops Announcements of Opportunity and receives proposals in response. In general, the program scientist represents the scientific aspects of the mission at NASA Headquarters; responsibilities include representing the scientific community in the development of projects and bringing the project scientist's problems to higher authorities when the project scientist fears the integrity of the project's scientific goals could be compromised.

Files of discipline scientists documenting their role as study scientist and program scientist should be scheduled as permanent and eventually transferred to the National Archives. They would include correspondence, Announcements of Opportunity, documents concerning the selection of PIs, and project planning and development records.

C. Records of NASA Headquarters' Program Managers
[Table of Contents]
The program manager is the NASA Headquarters representative on a project. They are held accountable for the project's engineering decisions and its budgetary allocation process. Along with these duties they also prepare budgetary justifications and review the progress of the project for Headquarters. If there are any major changes in the budget or scope of the project, the program manager is the person responsible for making the decisions concerning the descoping or upscoping of a project.

The program manager's files should include project planning documents, project review documents, program development reports, correspondence, minutes of project or program meetings, budgetary records, schedule records, and program progress reports. They should be scheduled as permanent and eventually transferred to the National Archives.

D. NASA Flight Centers' Project Managers
[Table of Contents]
The project manager is the principal authority in matters relating to the project's engineering. Reporting to the program manager at Headquarters, the project manager is in charge of the budget for the project as well as being the principal contact for the sub-contractors needed to construct the spacecraft and for the scientists building instruments. The project manager's records are probably the most complete set of records of a project, especially concerning its technical and fiscal sides, from inception to launch. When the project moves into its operational phase, these records are sometimes broken up; while some are kept by the project manager for reference on future projects, most are sent to the operations manager (who handles day-to-day management after launch), who chooses the files judged most useful and sends the others to records storage.

Records of the project manager should be scheduled by NASA flight centers for permanent retention and offered to the Branch Archives of the National Archives. The records should include project approval documents, budgetary records, reports and presentations relating to design reviews, reports of test results, progress reports, correspondence with contractors, spacecraft status reports, and—at least in some cases—minutes of the SWG meetings. If records were transferred to the operations manager, they should be returned to the project manager's records when the project is complete.

E. NASA Flight Centers' Project Scientists
[Table of Contents]
The project scientist advises the project manager on the scientific aspects of the project and is also a PI on the project with responsibility for developing one of the payload experiments (instruments). They chair the SWG which is the forum for the PIs to discuss the scientific problems and plans for the mission. After the spacecraft is launched, PIs can submit proposals to the project scientist to receive further funding for analysis of the data.

The records of the project scientist should be scheduled as permanent by NASA flight centers and offered eventually to Branch Archives of the National Archives. The records should include SWG minutes and recommendations, correspondence with PIs, logbooks and other records about the use and operation of the instruments, and proposals for further analysis of the data from user/investigators.

F. NASA Flight Centers: Science Working Groups
[Table of Contents]
The SWG is composed of the project scientist and the PIs involved on a project. Each is responsible for an experiment (instrument) of the payload. This group is the main forum for the investigators to discuss the scientific issues involved in a project. The project scientist, who is also a PI, acts as the chair and represents any scientific concerns of the group to the project manager or to other NASA officials. These concerns could include developing spacecraft trajectories, solving instrument interference, deciding the scientific topics to be addressed, and establishing priorities for the use of the instruments during flight.

The minutes and reports of this group and any correspondence between the project scientist and the PIs will give the best insight into the collective science aspect of the project. These records are kept as part of the project scientist's materials (copies of the minutes may or may not be kept by the project manager); in scheduling and transferring the project scientist's material, SWG records should get high priority. It is important to note that the project manager's responsibilities cease at launch, while the project scientist's continue.

G. Space Science Experimental Data
[Table of Contents]
The data—both raw and processed—are electronic in format and usually observational in character. Their usefulness for long-term scientific purposes is unquestioned.

Observational data should be provided by the original users with adequate metadata to make them accessible to secondary users. The preparation and retention of data should follow the recommendations of the National Research Council's report Preserving Scientific Data on Our Physical Universe: A New Strategy for Archiving the Nation's Scientific Information Resources [21].

Professional Papers of Space Scientists
[Table of Contents]
Each space project or mission has a PI and associated team for each experiment (instrument) in the payload. These teams are almost always made up of individuals from academia or government-funded research institutes (including NASA flight centers); they are often multi-institutional collaborations in their own right. Within the teams, the PI is equivalent to both the project manager and project scientist, making decisions on the level of engineering risk to assume in the pursuit of scientific capabilities. The PI is in charge of, and accountable for, developing and providing the instrument proposed for the project. Some members of the experiment team specialize in instrument design and construction while others specialize in data analysis. Most of the space science instrument teams we studied used existing instruments that had been tested by industry and modified by the teams, under NASA grants, for the harsh realities of research in space. These teams did not work with other teams on the project in designing their instruments nor—for the most part—in analyzing their data. There was usually little control or dispute over who published what within a team, and teams usually had to make their own ad hoc arrangements to share data and write joint papers on topics of mutual interest.

We have earlier recommended that the most efficient way to document the development of instruments for use on space project payloads would be by saving the files of proposals and grant administration in the offices of discipline scientists at NASA Headquarters.

Decisions to archive papers of scientists who have served as PIs or members of their teams for space science projects should be made by archivists at their home institutions on the basis of their overall careers. If scientists have regularly led or participated in important research, the records of their participation are well worth saving.


[Table of Contents]
Listed below are summary descriptions of the eight multi-institutional collaborations we selected as a sample for study.

COCORP (the Consortium for Continental Reflection Profiling) adapted the technology of seismic reflection profiling, which oil companies routinely used for exploring the earth's crust to feasible drilling distances, to probe greater distances into the crust. By varying the parameters at which the instrumentation sends out and receives signals, and by processing the data to bring out signals from deeper structures, COCORP (and seismic reflection profiling projects in other countries) have made sub-surface structure a "third dimension" to surface geology. The project is managed and run out of Cornell University. It has been oriented towards exploration of the continental crust and discovery of significant sub-structure rather than focused study of particular sites in combination with other techniques. However, geologists interested in particular locales have successfully recommended their sites to COCORP and assisted in the interpretation of the data COCORP acquired.

Subcontracting: Private firms had made a business of using seismic reflection profiling to acquire data for oil companies. COCORP has contracted for the services of these firms instead of developing its own instrumentation. COCORP did acquire self-sufficiency in data-processing and analysis; some computers that Cornell purchased for COCORP were customized by the manufacturer.

Funding: Funding for COCORP was provided by the National Science Foundation.

Institutions Involved: Cornell University, Princeton University, University of Texas at Houston, University of Wisconsin.

The Ocean Drilling Program (ODP) and its predecessor, the Deep Sea Drilling Project (DSDP), retrieve cores from the ocean floor using dedicated research vessels that took advantage of the design, navigation features, and safety equipment that enabled oil companies to explore for off-shore oil. By drilling in ways that minimally disturb the cores, the project provides participating scientists with data relevant to earth history and tectonic structure. The Scripps Institution of Oceanography of the University of California, San Diego, managed DSDP from its inception in 1968. The Ocean Drilling Program, initiated in 1985, is managed scientifically by the Joint Oceanographic Institutions for Deep Earth Sampling (JOIDES) and fiscally by Joint Oceanographic Institutions, Incorporated (JOI), which contracts to the Texas A&M Research Foundation for the maintenance and operation of the ship. The DSDP/ODP has been international in scope since 1975.

The work is conducted in a series of "legs," the ship's track along a selected path. The American Institute of Physics (AIP) Study focused on Leg 85, which was a classic stratigraphic-palaeoceanographic program, and Leg 133, which explored Australia's Great Barrier Reef.

Subcontracting: The research vessel for DSDP, the Glomar Challenger, was built by Global Marine Corporation which, unlike other bidders on the contract, proposed building a new vessel rather than retrofitting an existing one. The subcontract involved the design of a scientific workplace.

Funding: Funding for the Deep Sea Drilling Project was provided by the National Science Foundation. Funding for the Ocean Drilling Program is provided by the National Science Foundation; Canada/Australia Consortium for the Ocean Drilling Program; Deutsche Forschungsgemeinschaft (Federal Republic of Germany), Institut Français de Recherche pour l'Exploitation de la Mer (France), Ocean Research Institute at the University of Tokyo (Japan), Natural Environment Research Council (United Kingdom), and the European Science Foundation Consortium for the Ocean Drilling Program (Belgium, Denmark, Finland, Iceland, Italy, Greece, the Netherlands, Norway, Spain, Sweden, Switzerland, and Turkey).

Institutions Involved in JOIDES: National Science Foundation; JOI, Inc.; JOIDES American member institutions: Columbia University, Lamont-Doherty Earth Observatory; University of California, San Diego, Scripps Institution of Oceanography; University of Hawaii; University of Miami, Rosenstiel School of Marine and Atmospheric Science; Oregon State University; University of Rhode Island; Texas A&M University; University of Texas at Austin; University of Washington; Woods Hole Oceanographic Institution; and JOIDES international participants from the Canada-Australia Consortium, the European Science Foundation, France, Germany, Japan, and the United Kingdom.

Institutions Involved in Leg 85: University of Bordeaux, France; University of California, San Diego, Scripps Institution of Oceanography; City of London Polytechnic, United Kingdom; Hawaii Institute of Geophysics; Massachusetts Institute of Technology; Oregon State University; University of Rhode Island; Ruhr-Universität Bochum, W. Germany; United States Geological Survey; The University, Newcastle upon Tyne, United Kingdom; Yamagata University, Japan.

Institutions Involved in Leg 133: Australian Bureau of Mineral Resources, Geology and Geophysics; University of British Columbia, Canada; British Geological Survey; Columbia University, Lamont-Doherty Earth Observatory (formerly Lamont-Doherty Geological Observatory); University of Edinburgh, Grant Institute of Geology, United Kingdom; Geological Institute, ETH-Center; University of Hawaii; I.A.G.M. University Granada-C.S.I.C., Spain; Institut und Museum für Geologie und Paläontologie, W. Germany; Institut für Paläontologie, W. Germany; Institute of Oceanography, National Taiwan University, China; Kanazawa Institute of Technology, Japan; Laboratoire de Sédimentologie, Université Claude Bernard, France; University of Miami, Rosenstiel School of Marine and Atmospheric Sciences; Ocean Drilling Program, Université de Provence, Centre de Séd. et Paléontologie, France; Texas A&M Research Foundation; Rice University; University of Rhode Island; Florida State University.

The Greenland Ice Sheet Project, established in 1978, was a small collaboration among Danish, Swiss, and U.S. scientists, who established that analysis of ice cores from Greenland's glacier could yield a relatively high-resolution climate record. In 1988, the second Greenland Ice Sheet Project was initiated, with engineering management provided by the Polar Ice Coring Office (PICO) and science management provided by the University of New Hampshire. Participating scientists examined a 3000 meter ice core drilled to the base of the ice at its thickest, least disturbed point and into the bedrock. This large collaborative project was conducted in conjunction with a European project (GRIP), which drilled through the same ice sheet 30 kilometers (10 ice-thicknesses) away—a distance that will make statistical comparisons of the cores convenient. By examining physical properties of the cores and the composition of the air bubbles trapped in the cores, participating scientists are constructing time series on which to base inferences concerning the dynamics of climate change.

The AIP Study focused on GISP2, but also interviewed individuals who had served on GISP and GRIP.

Subcontracting: Most instrumentation used to analyze cores or air bubbles was purchased commercially, modified for the particular use, attached to laboratory-constructed feed systems, and operated semi-automatically through the use of laboratory-written software. Little of this instrumentation, however, was developed or bought solely for GISP.

The ice drill, which was designed and built by PICO under contract to the National Science Foundation, is of technological interest because of its unprecedented (5.2-inch) diameter. PICO also researched an environmentally safe drilling fluid to fill the hole. The drill itself was put together almost entirely from commercially available parts.

Funding: Funding for GISP1 was principally from the National Science Foundation with the Danish and Swiss governments supporting the laboratory analyses of their scientists. All of GISP2 was funded by the National Science Foundation.

Institutions Involved: University of Bern; University of California, San Diego, Scripps Institution of Oceanography; Carnegie Mellon University; Cold Regions Research and Engineering Laboratory; Columbia University, Lamont-Doherty Earth Observatory (formerly Lamont-Doherty Geological Observatory); University of Copenhagen; University of New Hampshire; Pennsylvania State University; Polar Ice Coring Office (currently contracted by the National Science Foundation to the University of Alaska, Fairbanks); University of Rhode Island; St. Olaf College; State University of New York—Buffalo; University of Washington.

Established in 1984, the Incorporated Research Institutions for Seismology (IRIS), a large consortium of academic institutions, provides academic seismologists with cost-effective access to the advantages of digital technology. IRIS has been organized around three components: the Global Seismic Network (GSN), which installs very-broad-band seismometers in network stations to enable scientists to monitor broad frequency and amplitude bands of seismic activity; the Program for Array Seismic Studies of the Continental Lithosphere (PASSCAL), which loans digital, portable seismometers that enable scientists to design more varied and sophisticated active-source seismic experiments; and the Data Management Center (DMC), which provides scientists with access to and utilities for processing the data collected with IRIS instrumentation. The IRIS consortium also works with the Defense Advanced Research Projects Agency (DARPA) to set up a seismic monitoring program in the former Soviet Union.

Subcontracting: There are two subcontracts at the heart of IRIS: one for GSN and one for PASSCAL. University geophysicists served on committees that wrote specifications for the instruments and selected contractors to provide the instruments. The GSN committee selected Quanterra of Harvard, Massachusetts, which a Harvard-trained seismologist founded to manufacture an innovative analog-to-digital converter he had developed for broad-band seismometry. The PASSCAL committee selected a young company, Refraction Technology (REFTEC) of Dallas, Texas. The Quanterra contract was the subject of the case study which is reported in Report No. 2, Appendix B: "The Development of Very-Broad-Band Seismography: Quanterra and the IRIS Collaboration."

As the price of the analog-to-digital converter built for the GSN seismometers fell, it became feasible to incorporate them into the portable seismometers, which REFTEC had designed in modules that were readily upgraded. As a result, the two types of seismic instruments are more similar than initially expected.

Funding: IRIS is funded by the National Science Foundation and the Defense Advanced Research Projects Agency.

Institutions Involved: IRIS, United States Geological Survey, and the 50 American academic institutions that constitute the membership of the IRIS consortium.

The International Satellite Cloud Climatology Project (ISCCP) samples and processes the data from the several weather satellites that provide weather forecasters with daily pictures of regional cloud coverage; the project generates global statistics on the distribution and properties of clouds. Cloud-radiation feedback (along with global ocean circulation) has been one of the two physical processes for which climate modelers lacked measurements that were commensurate with the sophistication of their models. ISCCP, which used only data from satellites that governments had already put in place and were committed to maintain, was the first project sponsored by the World Climate Research Programme (WCRP). Managed by the Joint Science Committee of WCRP, the project challenged participants to develop an algorithm that could capture cloud characteristics against a global set of backgrounds. The work on the algorithm was conducted in the United States at the Goddard Institute for Space Studies.

Subcontracting: Because ISCCP reprocesses the satellite images already being created for daily weather forecasts, it has not had to issue any subcontracts to industry.

Funding: Funding for ISCCP is provided by NASA and the National Oceanic and Atmospheric Administration with contributions from the Japanese and European weather agencies.

Institutions Involved: Colorado State University; Goddard Institute for Space Studies; GKSS Institute for Physics, Geesthacht, Germany; Universität zu Köln; Laboratoire de Météorologie Dynamique, CNRS, France; National Center for Atmospheric Research; National Severe Storm Center; Florida State University; University of Wisconsin.

The Parkfield Earthquake Prediction Experiment has concentrated the deployment of seismic and other geophysical instrumentation at a site that seismologists predicted would experience a major earthquake (of magnitude six or greater) before 1993. In addition to acquiring data on geophysical events and processes preceding an earthquake and on the utility of individual instruments or combinations of instruments to indicate an impending earthquake, the scientists also agreed to issue short-term warnings of anticipated earthquakes according to a protocol they designed with public officials of the state of California. Managed out of the USGS office in Menlo Park, California, Parkfield has been the first officially sanctioned attempt at short-term earthquake prediction in the United States.

Subcontracting: Most of the equipment used in this project was the product of longer-standing efforts to develop seismological measuring techniques. In at least one instance, that of a two-color laser system for precise surveying, a Parkfield instrument was built by a company, Terra Tech, begun by academic researchers to develop and produce their instrumentation ideas.

Funding: The Parkfield Earthquake Prediction Experiment was funded by the United States Geological Survey, the California Division of Mines and Geology, and the National Science Foundation.

Institutions Involved: University of Alaska; University of California, Berkeley; University of California, Riverside; University of California, Santa Barbara; Carnegie Institution of Washington; University of Colorado; Duke University; University of Queensland; Stanford University; United States Geological Survey—Menlo Park.

Initiated in 1981, the Warm Core Rings project used three research vessels to sample repeatedly, in coordinated fashion, selected meso-scale eddies that separate from the Gulf Stream off the eastern coast of the United States and move separately through the surrounding water. The integrity of these rings and the ability to track them by satellite allowed scientists from all the oceanographic disciplines to study ocean processes in a relatively controlled setting and in conjunction with others studying related processes. A Science Management Office at Woods Hole Oceanographic Institution managed this project, with the assistance of an Executive Committee composed of participating scientists from each of the disciplines represented in the project.

Subcontracting: Most of the instrumentation used in this project was the product of longer-standing efforts to devise means to sample and analyze ocean waters. In one instance, a principal investigator used the project to develop and introduce a new sampling technique. However, there does not seem to have been a subcontract involving innovative engineering that had significance for the whole collaboration.

Funding: Funding for the Warm Core Rings Project was provided by the National Science Foundation; however, the National Oceanic and Atmospheric Administration, the Office of Naval Research, and NASA had independently provided the funding that had enabled several participants to develop the techniques they used in Warm Core Rings.

Institutions Involved: Bigelow Laboratory; University of California, Santa Barbara; Columbia University, Lamont-Doherty Earth Observatory; Dalhousie University; Goddard Space Flight Center-Wallops Island; Harvard University; University of Maryland; Massachusetts Institute of Technology; University of Miami; NASA; National Marine Fisheries Service; Naval Post Graduate School; Nova University; Oregon State University; University of Rhode Island; Texas A&M University; Woods Hole Oceanographic Institution.

Global ocean circulation (along with cloud-radiation feedback) is one of two major physical processes for which climate modelers lacked measurements that were commensurate with the sophistication of their models. The World Ocean Circulation Experiment (WOCE), which is managed by the International WOCE Science Steering Committee on the authority of the World Climate Research Programme, has mobilized several approaches to the cause of creating model-relevant data. It consists of three "core projects"—a hydrographic survey of the Pacific Ocean, a hydrographic survey of the Indian Ocean, and a set of process experiments in the Atlantic Ocean—along with several other measuring techniques, including deployment of innovative floats and construction of an altimeter-equipped satellite to measure sea levels. The United States WOCE Science Steering Committee, which oversees a United States WOCE Office at Texas A&M University, chose to make the hydrographic surveys its top priority. The AIP interviews have concentrated on the hydrography.

Subcontracting: WOCE provides interesting material for a study of the private-public boundary in instrumentation. To analyze the carbon-14 content of small samples of water, the NSF let a contract for construction of an accelerator mass spectrometer to the Woods Hole Oceanographic Institution. All collectors of water samples for carbon-14 analysis have been required to send samples to the Woods Hole spectrometer rather than perform analyses in their home laboratories. The floats WOCE has deployed, which stay submerged at a specified depth for an extended period and then rise to the surface to signal their positions to a satellite, were purchased from Webb Research, which developed them without government funds in order to control patent and proprietary issues.

Funding: American participation in the World Ocean Circulation Experiment is funded by the National Science Foundation, the National Oceanic and Atmospheric Administration, the Office of Naval Research, NASA, and the Department of Energy.

Institutions Involved: Bedford Institute of Oceanography, Canada; University of California, Scripps Institution of Oceanography; Columbia University, Lamont-Doherty Earth Observatory (formerly Lamont-Doherty Geological Observatory); Goddard Space Flight Center; Institute for Oceanographic Sciences, England; Massachusetts Institute of Technology; University of Miami, Rosenstiel School of Marine and Atmospheric Sciences; National Center for Atmospheric Research; University of New Brunswick; Texas A&M University; Oregon State University; Woods Hole Oceanographic Institution.

[Table of Contents]
This essay serves two overlapping purposes. First, it discusses those aspects of multi-institutional collaborations in geophysics that are most important to generating or locating documents of likely interest to historians of science and technology. In this sense, it provides the empirical perspective necessary for archival analysis and appraisal guidelines to be well grounded in the realities of recent research. Second, it offers observations on where the institutional framework of the government-funded, multi-institutional collaboration has seemed to affect (or leave undisturbed) the social relations and acquisition of expertise that are necessary for the pursuit of scientific research. In this sense it provides a preliminary perspective on social patterns and changes within a scientific community.

The foundation for this essay is 106 interviews with participants in eight multi-institutional collaborations in geophysics. All the collaborations began operations between the late 1960s and the late 1980s. In our choice of projects to study, the AIP staff and consultants consciously tried to cover a range of features: projects that involved a variety of institutions, projects supported by a variety of patrons, internationally and nationally organized projects, seismological, climatological, and oceanographic projects, and smaller and larger projects. In our choice of interviewees, the AIP staff sought to cover all the types of people who might be vital to the documentation of scientific work, from administrators at funding agencies to graduate students at university departments. The strategy was to learn a little about a lot in the belief that broad exposure was essential to producing sound recommendations for archivists and policy makers. We also aimed to provide a context that other scholars could compare with their own case studies, which will likely remain the dominant mode of inquiry into "big science."

The structure of the essay is built around its central finding. We propose two categories in which to classify the projects we studied. Three of the eight projects [DSDP, COCORP, and IRIS][22] sought to import and adapt for academic geophysics techniques that had proven their mettle in industrial field work or other scientific fields. We call these "technique-importing" projects. The other five sought to deploy geophysical techniques to study a site that offered a strategic window into poorly understood processes [Parkfield, Warm Core Rings, GISP2] or a global phenomenon that outstripped the resources of any individual institution to capture [ISCCP, WOCE]. We call these "technique-aggregating" projects. Part I below employs quasi-narrative concepts—the projects' formation, organization, and field work, which roughly correspond to their beginning, middle, and end—to describe the most important distinctions between the two types of geophysics projects and their common features. Part II uses the same concepts to describe the common features and variability within each category of projects. Part III returns to the more general level of content, but the descriptions are organized around the non-narrative functions of funding, internationalism, careers, and communication in order to highlight the structural similarities and differences among the cases. There are overlaps in the content among the parts; the fuller exposition is in the first two, which contain some material not found elsewhere.


[Table of Contents]
Geophysicists do not, as a rule, enjoy the services of institutions where scientists and engineers routinely unite to define projects for the broader community to carry out. Only one of the eight geophysics projects we studied [GISP2] had project managers from an institution that existed to manage other people's projects [Polar Ice Coring Office at U. Alaska], and that institution did not include scientists. Geophysical research institutes, including specifically oceanographic institutes and government research centers, come closest to combining scientific and technical personnel. But they principally promote the research of their own staffs, not of a greater community.

Geophysicists perceive many and varied causes for the lack of institutions uniting academic, community-serving geophysicists with engineers. In the United States, the interests of the national security agencies concerned with discriminating earthquakes from nuclear weapons tests, and oil companies concerned with discovering accessible oil reserves, have absorbed the supply of engineering talent in more or less proprietary concerns. Government agencies pursuing geophysical research to understand geological hazards have not made external researchers a prominent part of their programs. Many geophysicists, busy with significant micro-level experiments, have not demanded institutions to deal on a regular basis with larger-scale studies and their engineering complexities.

As a result, geophysicists have scrambled to find or simply deferred finding the engineering advice they would need to carry out multi-institutional projects. In some projects, instigating scientists in the United States made connections with engineers through the staff of the National Research Council [DSDP], through meetings of a scientific society [COCORP], or through workshops [IRIS]. But in five of our eight cases, including all that formed as international projects, scientists convinced themselves that they could handle the managerial and engineering burdens of their scientific ambitions with only informal engineering help.

Within the United States, early supporters or ultimate targets of project instigators were the United States Geological Survey (USGS), the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), the Office of Naval Research (ONR), the National Academy of Sciences (NAS), and several programs of the National Science Foundation (NSF). The World Meteorological Organization (WMO), the International Council of Scientific Unions (ICSU), and the several research agencies of European and Asian national governments have participated in forming international projects. These institutions have similar terminology for their internal structure. Funding agencies are divided into divisions, which are divided into programs, which are run by "program managers," who are usually scientists. Project instigators either send unsolicited proposals to the relevant program manager or convince the program manager to issue a request for proposals or an announcement of opportunity on behalf of a project. The scientist responsible for supervising the proposed research has the title "principal investigator" (hereafter, PI). Program managers usually request written reviews for a proposal from several of the PI's peers. Finally, the program manager meets with a review panel of scientists to determine which proposals to support in light of the reviews and the program's budget. A recommendation to support an unusually expensive project will be reviewed higher in the agency's hierarchy; technique-importing projects always received such scrutiny.

Almost all our selected projects formed because scientists at geophysical research institutes perceived opportunities that required more resources or inspired more interest than could be channeled through their home institution. All the projects we studied included among their instigators scientists who either had previously worked for industry, or worked in research institutes with substantial engineering resources, or had acquired extensive logistical experience in previous research. This statement covers people in too many institutions with too many patrons to do more than call attention to the rarity of other situations. In only one instance [COCORP] did a scientist in a university department assume the burden of instigating a project, and his first act was to sound out industrial scientists with appropriate experience. There were no instances of an industrial scientist instigating one of these projects.

In seven of the eight projects we studied [all but WCR], arrangements to organize scientists and spend money were preceded by a statement from a board of the National Academy of Sciences or some other advisory panel asserting that the United States should give priority to the topics that these projects addressed. However, instigators had the impression that such statements never ensured the formation of a project, because program managers at funding agencies rarely interpreted the statements as a license for action, while the Academy itself has seemed, at best, a cumbersome organizer of research projects. Endorsements from advisory bodies have nevertheless been useful when a proposal has been so expensive that program managers needed higher levels of clearance or support in order to fund the project.

To form technique-importing projects, would-be instigators had to demonstrate that the scientific opportunities merited a leap beyond incremental improvements in technique. They also had to create an organizational framework that balanced the desire for central project administration against each institution's desire for participation in project governance. As a rule, the first was a socially straightforward matter of obtaining modest funds for a trial. But the latter involved difficult negotiations among prominent representatives of competing institutions. In all cases, the instigators ended up creating a consortium whose administrative headquarters could be part of one of the competing institutions, but whose formal governance and substantive policy-making powers were vested in committees whose members were selected and participated with no special regard for the headquarters institution.

To form technique-aggregating projects, would-be instigators had to demonstrate that the whole would be greater than the sum of its several experiments, while creating an organizational framework that respected the autonomy of each PI. The latter was socially simple. The participating scientists set up a "Science Management Office" (hereafter SMO) that was limited to coordinating proposal writing and project logistics; with its powers so circumscribed, there was rarely a dispute over which of the participants should direct it. The demonstration of scientific value was more complex. To keep project logistics feasible, would-be participants had to compromise on what they would have ideally proposed as individuals. But all knew that the standard practice of funding agency reviewers and managers was to judge individual proposals against the standard of recent accomplishments in their experimental specialties. Workshops were the usual forum in which instigators sought a set of participants that was sufficiently large to indicate general interest, sufficiently small to be manageable, and sufficiently prestigious to overcome reviewers' doubts about individual proposals.

[Table of Contents]
Despite the absence of permanent institutions that make a business of forming and managing geophysics projects, each of the two types of geophysics projects has produced terminology whose meaning, while varying somewhat across projects, does correspond to similar organizational structures. Among the technique-importing projects, it was common for interviewees to speak of "consortia." These were responsible for appointing standing committees, which advised or directed project executives, who managed the projects' people and equipment. Among the technique-aggregating projects, it was common for interviewees to speak of SMOs that were funded to coordinate logistics for the PIs, who formed a Science Working Group (hereafter, SWG).

Technique-importing projects in their formation all struggled with whether importing a technique must result in the promotion of one institution. A consortium structure helped these projects through that struggle in one of two ways. Either it created governing committees in which the involved institutions could participate as equals, even when one among them was responsible for daily project administration [DSDP and COCORP], or it created a new, freestanding entity in which the involved institutions could vest responsibilities that they did not want any extant institution to enjoy [IRIS]. Either way, the collaborating institutions could have equal says in how the imported technique would be deployed.

Technique-aggregating projects found it advantageous to manage themselves through a made-for-the-purpose SMO because participating scientists wanted to maintain negotiating room over the priorities given to individual accomplishment and collective coherence. The SMO was limited to managing logistics, so its PI was less a powerful decision-maker and more a facilitator of ongoing negotiations in the SWG over what activities had to be subject to project discipline, what activities it would be desirable to subject to project discipline, and what activities were best left to individual PIs. The result has been projects that left some unhappy—even, in a few instances, totally disaffected from a project—but enabled most to fulfill their professional ambitions.

The two types of geophysics projects have little in common organizationally. The technique-importing projects were created to apply the technique to many objects of curiosity, and thus have needed to operate far longer than the technique-aggregating projects. Therefore, the technique-importing projects have adopted the more secure and formal structure of a consortium, whereas participants in the shorter technique-aggregating projects have preferred a less formal structure that maximizes their sense of self-governance.

[Table of Contents]
"Experiment," in the context of technique-aggregating projects, consistently refers to the activities that a PI oversees in order to produce data and findings. Usually PIs' teams have consisted of people from their own institutions, though occasionally multi-institutional experiment teams perform an experiment within technique-aggregating projects. Usually PIs have either taken their equipment into the field or brought back samples, such as glacial ice or ocean water, to analyze in their laboratories, but occasionally, PIs used communal instrumentation in lieu of acquiring their own.

"Experiment" and "team" in the context of technique-importing projects have a more ambiguous meaning. These projects have usually provided large amounts of instrumentation for several researchers from different institutions to use at a target endorsed by a consortium's advisory committee. Because these researchers independently qualified to be part of the research party on the strength of their individual expertise, they seem analogous to PIs with their technicians, students, or postdocs as their teams. Yet because these researchers collectively used the project's instrumentation with one or two of them designated leaders, they seem analogous to team members, with the research party's leader(s) as the PI(s). The latter usage will be followed here. When numerous independent scientists joined to investigate a designated target with an imported technique, they adapted to the roles and tools provided by the consortium; when numerous independent scientists formed a technique-aggregating project, they created an SMO that suited their needs. Equating the party leaders with the PIs and the party members with team members better reflects the level of initiative and sense of intellectual investment among the participants in the two types of projects.

[Table of Contents]
The most difficult obstacle to forming a technique-importing project was defining an agreeable social structure for producing a large-scale proposal, and then, if funded, implementing it. Two of the three technique-importing projects we studied [DSDP and IRIS] owed their origins to discussions of well-recognized research limitations among prestigious scientists who routinely met for conferences or committee work. Thus these projects had several possible sources of leadership, with everyone having their own sensibilities about the field's directions and their institution's place in it. The third project [COCORP] had a clearly leading instigator whose insights and institutional needs were critical to the project's conception, but he still sensed the need for involving comparable colleagues in order to have a social base of support that matched the scale of his hopes.

Instigators from different institutions always realized that they had to unite behind a single proposal and not put the NSF in the position of judging competing large proposals. Yet they resisted sharing administrative responsibility for any of these projects, because they saw no virtue for the project's science in institutional interdependence or division of technical labor. In two of the cases [DSDP and COCORP], the scientists believed the technique could best be imported if one institution was charged with obtaining and maintaining the project's equipment and personnel. Tension over whose institution would have that role and what meaning participation would have for the others never entirely dissipated. In one project [DSDP], leadership fell to the institution that was least objectionable to the others. The project's success led to a change in lead institution. In the other, the institution most invested in the project seized leadership, but the project's success stimulated political pressure that has forced the project to coordinate its operations with other institutions. Instigators for both projects stressed that project advisory committee(s), which included members from the other institutions and in which members of the lead institution held no privileged position, ensured that research goals were formulated on a multi-institutional basis. In the third case [IRIS], inequalities among the participating institutions forced instigators to propose a freestanding organization that enabled the project to "level the playing field" for the member institutions. The instigators solved the problem of who would run this new organization by recruiting informed but less partisan scientists for executive positions.

In all three cases, instigators had to rely on their own ingenuity and persuasive powers to create an institutional framework. Although the National Academy of Sciences had called attention in a report to each technique's potential, the NAS was not helpful with the problems of organizational politics. In two cases, the NAS did nothing beyond issuing its report [COCORP and IRIS]. In the third, the NAS had tried to organize a project and failed; some of the individuals involved in the failure then successfully developed other institutional arrangements for a project that imported the technique [DSDP, whose techniques originated in the abortive Mohole project]. In general, instigators believe the NAS to be too passive an advocate for research opportunities to push funding agencies to organize projects, and an inappropriate institutional base for exploiting research opportunities.

The proposals for projects to import techniques into geophysics were always out of scale for the particular NSF program handling them. NSF program managers found NAS endorsements of considerable utility, because such proposals had to be reviewed at the NSF's division and director's level and go before the National Science Board for approval. The endorsements, program managers believed, were evidence to their administrative superiors that the proposal addressed a genuine research opportunity. With technical feasibility and scientific desirability established, agency discussions shifted from the substantive to the procedural. In one instance [DSDP], a proposal raised difficult issues of accountability for the expenditure of funds, but the accountability issues had no bearing on discussions of what data the project would obtain and what instrumentation it would use to obtain the data.

[Table of Contents]
The instigators of all three technique-importing projects formed consortia to propose and manage the projects. In one instance [IRIS], the instigators consciously modeled the consortium's by-laws on an earlier project's [DSDP]. But in all cases, the consortia formed "standing committees" of scientists from outside the host institution to review or formulate policy for the major scientific and technical issues, vested powers of governance in an "Executive Committee" comprised of representatives of member institutions, and placed daily administration of project activities in the hands of project executives. These similarities suggest to us that technique-importing projects all formed consortia because that structure seemed compellingly appropriate, not because an earlier set of by-laws could be conveniently adapted.

A. Scope of Consortium Standing Committees in Technique-Importing Projects
[Table of Contents]
All three technique-importing projects formed standing committees to recommend or set strategies for the project. Two of the three projects [COCORP and IRIS] have had more than one standing committee with each responsible for a separate aspect of the project; one [DSDP] has had a single committee overseeing several subcommittees. The standing committees formally reported to the consortium Executive Committee, discussed below, which hired project executives. But in most cases, the project executive responsible for implementing the committee's advice was the committee's primary contact.

In two of the three technique-importing projects we studied [COCORP and DSDP], the number of institutions formally in the consortium was small compared to the number of institutions with interested scientists. In these cases, the consortia have drawn on the larger national or international scientific communities for members of their standing committees and subcommittees. This practice began as and remains these projects' principal technique for acquiring a broad enough base of participation to secure support for their continuance. In the third case, the project began with a sizable number of consortium members, and most of the remaining institutions with scientists interested in the project became paying consortium members not long after the project's inception. Thus any issue of whether standing committee members could be drawn from outside the staffs of member institutions became moot.

A more difficult issue than institutional eligibility requirements for standing committee membership has been whether a standing committee member can simultaneously serve on the Executive Committee. The instigators of a consortium with few initial members [DSDP] consciously decided not to have standing committee members serve simultaneously on the Executive Committee (except ex officio for purposes of coordination) in order to sever issues of research priorities handled in the standing committees from issues of institutional politics handled in the Executive Committee. Participants in the project have uniformly hailed this arrangement for keeping the inter-institutional antipathies apparent in the project's formation from carrying over into discussions of the research agenda. But there was one instance, in another project [IRIS], where the power gained by an individual who served at both levels may have made a useful counterweight to the consortium's most powerful institution.

Whatever the precise relations of standing to executive committees, it was the standing committees that tackled the issues that are most important from a historical perspective. Standing committees have deliberated on the general designs and specifications for the projects' major pieces of instrumentation; they have kept academic scientists abreast of the latest in industrial techniques of relevance to scientific pursuits; and they have reviewed or themselves created the plans to deploy the project's instrumentation. The deliberations on instrumentation have been contentious when the consortium's choices affected the viability of instrumentation research programs in academic or industrial laboratories. However, standing committee decisions appear never to have been challenged as the product of favoritism or "insider trading." The judgments of industrial experts likewise appear never to have been challenged.

Most important for the long-term significance of standing committees has been their role in research planning. In all three of the technique-importing projects, standing committees were at least temporarily influential in determining the research done with the technique. In the most compelling case [DSDP], the standing committee and its subcommittees became an elaborate system for generating and evaluating research objectives. The standing committee set forth a general area for the year's research; the several subcommittees would try to build consensus among their members for favored targets within the general area and try to cut deals with each other for support of each other's pet targets; and the chairs of the subcommittees would present their "packets" of scientific objectives to the standing committee in a meeting to make decisions on what data to collect. (These meetings have become so important that the consortium has taken to taping them in order to demonstrate, if challenged, the reasoning and procedures behind the standing committee's selections.) The scientists who then oversaw the actual acquisition of the data have usually been either members of the subcommittee that succeeded in promoting its agenda or scientists that the subcommittee recommended for this task to the project's administrators.

B. Scope of Consortium Executive Committees in Technique-Importing Projects
[Table of Contents]
Executive committees have been important at the inception of all technique-importing projects as forums for debating and establishing project boundaries and ground rules. Their post-inception importance has varied, depending on the arrangements through which the project would acquire and make available its research instrumentation.

The project whose executive committees have retained the most importance [DSDP] has monopolized (even though it has not technically owned) the research instruments. Because research scientists had no alternative to working through the project, the Executive Committee was occasionally obliged to decide on issues that split the standing committee—for example, the value of spending money or project engineers' time on adding another research tool to the stable of project instrumentation. For the same reason, the Executive Committee has been obliged, especially when termination or recapitalization became the project's foreseeable fate, to consider adding consortium members (and thus increasing the breadth of scientific interests serving on the standing committee) to acquire fresh capital.

In another case, a project's Executive Committee completely disappeared [COCORP]. At its inception the consortium had decided to import the industrial technique by contracting with firms that were already providing data-acquisition services to industrial clients. This eliminated the issues that made the consortium's Executive Committee important. There was no need to deliberate on the value of acquiring improved instrumentation, because the contractors were already doing so to remain competitive in their quest for industrial clients. There was no need to reconsider the number of consortium institutions or to recapitalize the project, because the project did not impose expensive, irreversible technical idiosyncrasies on a contractor. Consequently, the Executive Committee faded away as the consortium folded into the institution whose scientists had been the PIs.

Intermediate between these extremes has been the most recent project [IRIS], which contracted out for the design and construction of instrumentation that met the consortium's specifications. Like the project with an Executive Committee of enduring importance, this project's future will probably generate issues—e.g., recapitalizing the instrumentation—that will keep the Executive Committee significant. But like the project whose Executive Committee disappeared, this project's future does not include issues of consortium expansion. Not only do most academic institutions in geophysical research already belong, but they can use their own money to buy instruments from the consortium's suppliers, and the consortium no longer funds or plans the research done with consortium instruments.

C. Scope of Project Staff in Technique-Importing Projects
[Table of Contents]
All the technique-importing projects had a geophysicist as their chief administrator, though the title of that position varied: "chief scientist" in DSDP, "director" in COCORP, and "president" in IRIS. They all oversaw engineers, who were responsible for providing technical services to research scientists. But the character of the services the engineers provided and the further duties and importance of the chief administrator varied, like the scope of the executive committees, with the social and technical means for data acquisition.

The chief administrator held considerable administrative authority, but little intellectual authority, over the project that monopolized a unique resource [DSDP]. Intellectual authority resided with the standing committees, which identified and justified targets for research, and then with the scientists charged with acquiring the data, who worked out with a designated project engineer the tactics for acquiring the data. However, the chief administrator has had much influence and the final power of approval over who would lead and serve on the scientific party to collect data at the designated targets. Furthermore, the chief administrator has developed formal rules for formatting data, curating samples, and publishing both preliminary, descriptive findings and a volume of refined, analytic findings. A staff scientist has worked with each scientific party to insure that they follow the rules and to serve as liaison between the scientific party and project staff and administration.

The chief administrator held far more intellectual power and at least equal administrative power in the project that contracted with firms that were already acquiring data for industry [COCORP]. The administrator participated in the standing committee's discussions of research targets and was a mentor to some of the scientists hired for the project. The administrator designated who among the project's staff scientists would work out the tactics for acquiring data with the project's "executive director," who arranged the contracts with the firms with the instrumentation. The chief administrator and executive director have not needed formal rules to enforce project discipline; their leadership in creating the project and their immediate availability to project scientists insured their stamp on the project.

The least powerful chief administrator has led the project that contracted out for construction of consortium instruments [IRIS]. The administrator was initially important for choosing professional engineers, who became central to developing a consensus within the standing committees over the instrumentation. But once the engineers and the standing committees were working well together, the chief administrator had fewer intra-project duties, especially because this project eventually chose to leave the determination of research targets to the government funding agency. Turnover in the administrator position has been high, to the distress of the staff (which would like stability) and to the distress of Executive Committee members, who get drawn into problems because the administrator has always been new at the job. However, one ex-administrator was dismayed by the difficulty of returning to research after his stint in the position. That does not bode well for making such a position attractive in the long run to any scientist with research ambitions.

D. Scope of Funding Agency in Technique-Importing Projects
[Table of Contents]
Funding agency personnel were important in defining the terms of consortium formation. After that, so long as a consortium could settle its issues within its structure and without asking for additional money, the funding agency and its program managers were passive toward intra-consortium issues. Such was the case in two of the three technique-importing projects we studied [DSDP and IRIS]. In the third, the consortium whose Executive Committee disappeared [COCORP], the funding agency was lobbied by scientists who resented the project's insularity. Eventually, the program manager, with the support of his advisory committee, stopped funding the project with block grants. These block grants had left the project staff to decide which research targets to pursue (subject to advice of the standing committee). Instead, the project has had to propose the study of particular research targets in competition with other relatively expensive geophysics proposals. Project staff have found their proposals stand a better chance when they fit with other institutions' plans to study a particular region. In effect, this technique-importing project has been evolving towards becoming a component of technique-aggregating projects.

[Table of Contents]
Experiment teams in technique-importing projects have usually been multi-institutional collections of researchers who fit into roles that the consortium defined for them through its design and acquisition of instrumentation. Teams have been created in varying ways and with varying forms of leadership. But their common purpose has been to use consortium resources to acquire data from sites that the consortium or funding agency have designated.

A. Origins of Experiments in Technique-Importing Projects
[Table of Contents]
The consortia's standing committees and subcommittees were the most common and important forums for discussions of possible experiments. Block grants that provided for the acquisition of data made these committees directly influential in the selection of research targets. The structure of the committees and the rules governing membership have been major managerial issues for technique-importing projects[23].

The primary social hurdle to initiating an experiment has been to convince someone of influence to sponsor an initiative. Once early research targets were successfully examined, demands to investigate other targets proliferated, technical failure became less tolerable, and the requirements for justifying a particular target for research increased. At first, brainstorming by the project's early enthusiasts, or hints about good sites from outside scientists, was sufficient to set in motion the necessary arrangements for performing an experiment. Over time, one project spawned an elaborate committee structure to consider targets and turned the creation of an experiment into a lengthy undertaking in science politics [DSDP]. Another project used postdoctoral scientists to scope out possible targets and present possible experiments to the standing committee [COCORP]. Only in the third case, in which the consortium passed responsibility for selecting research targets back to its funding agency, was the need for intra-project advocacy obviated.

The initial source of inspiration for experiments was simply the interests of scientists in particular research targets and processes. These projects have endured for decades on the breadth of their technique's applicability and the many targets that attracted scientific interest. Project administrators all consciously decided to charge their engineers with keeping the project's equipment abreast of the state of the art in industrial circles, which continued to invest far more in the research and development of instrumentation than science consortia could hope to raise. Even this level of effort, however, was sufficient to justify revisiting sites with improved equipment and to stimulate an expansion in the ways scientists imagined deploying instruments.

B. Organization of Experiment Teams in Technique-Importing Projects
[Table of Contents]
Each of the technique-importing projects acquired its instrumentation in different ways. The creation and structure of experiment teams was equally diverse because they followed from the manner in which the project provided instrumentation. The variations are straightforward to describe; whether they represent archetypes that also fit other projects will have to be determined in future research.

The project that monopolized a facility [DSDP] also dominated the creation and composition of experiment teams. The advocates of the experiment in the standing committee structure either became the "co-chief scientists" for the experiment team or found scientists who would take the job. The project's scientist-administrators had established a template of roles for experiment team members to fill, and all scientists recognized there was a political requirement to involve a diversity of institutions. Together the co-chiefs for the experiment and the chief scientist for the project negotiated their way to a team that balanced the co-chiefs' desire to include scientists with whom they were compatible, the project's need to maintain its base of political support, and meritocratic inclusion of scientists who were interested in the experiment but outside the co-chiefs' circles. The consistent result was teams composed of members who largely had not known each other, much less previously worked together.

Conflicts among these criteria, while present from early in the project, were especially apparent after the project expanded from a national to an international base of support and adopted a quota system to insure that participants in the experiments numerically reflected the financial contributions of the sponsoring nations. One international experiment was led by co-chiefs with a personality conflict; it included an inexperienced person within a co-chief's patronage scope; it forced a doctoral student to participate as a technician because his mentor had used up his patronage powers on more senior colleagues; and it needed last-minute recruitment of an appropriate expert from an appropriate nation. However, the overwhelming sentiment among interviewees was that such problems were minor irritants compared to the benefits of expanding their scientific contacts.

Once a team's participants were selected, the co-chiefs and the staff scientist for the team were responsible for seeing that team members did their jobs to project standards, both in the field and during post-acquisition work. Interviewees concurred that the co-chiefs and staff scientist felt constrained by the project-sanctioned plans except when field conditions or equipment failure forced changes. An exception occurred when co-chiefs defied project administrators by refusing to continue deploying a new piece of equipment that was working poorly, rather than jeopardize further data acquisition. The incident points out how scientists seeking data may lack interest in testing instrumentation innovations that they did not create.

The project that contracted out for data-acquisition services to private firms [COCORP] needed only a few scientists to design and supervise data-acquisition for any given target, because the contractors had attained high levels of competence in their service to corporate clients. The scientific team all came from the project's headquarters institution, and outsiders did not participate in data acquisition and processing. Once the project's younger scientists had acquired experience, its director appointed a "honcho" from the staff for each research target. That scientist worked out a plan for deploying instruments, and—if the target was selected—oversaw the contractor's acquisition of data, the processing of data, and the publication of first results.

The project that bought a large number of instruments built to its specifications and withdrew from selecting research targets [IRIS] played no role in team formation. The project's engineers set up centers for the maintenance of the instruments and encouraged instrument users to solicit advice from the maintenance centers' staffs. But scientists with a research target in mind had to form their own teams, using their "know-who" to find others—ideally with complementary technical skills in experiment logistics and data processing yet diverging scientific ambitions for the use of the data they would jointly collect.

All three projects assumed responsibility for archiving data and curating samples for use by the greater scientific community. This activity can serve both historical and scientific interests because materials important to both communities are likely to overlap in records justifying the project's manner of processing data. Participants always acknowledged that the policy was just, but did argue over the amount of project resources to invest into the effort (though never to the point of putting the project in jeopardy). Such arguments seem peculiar to a period when scientists had unequal computer resources and access to data.

C. Organization for Data Acquisition and Analysis in Technique-Importing Projects
[Table of Contents]
In all experiments within technique-importing projects, only team leaders could hope to influence strategy for data acquisition. In the facility-monopolizing project [DSDP], the standing committee set the goals and priorities for each experiment; once logistical constraints were taken into account, not even the co-chiefs had much latitude over the details of acquiring data. Team members requested samples for post-acquisition analysis and any particular equipment they wanted the project to provide; the co-chiefs worked out conflicts among research party members. In the project that acquired its data by contracting with outside firms [COCORP], the "honcho" dominated experiment design. In the instrument-buying project [IRIS], the PIs who formed an experiment team laid out the experiment; their more junior colleagues at most provided (occasionally critical) logistical help with operations until the data were collected.

The members of experiment teams in technique-importing projects always distributed their individually collected data streams to each other, and they always had temporary privileged use of an experiment's data before the project made the data available to outside users. The precise amount of time varied but the principle was the same: those who put time into field work deserved a head start in using the data. Within the teams, the leaders were responsible for mediating among scientists with interests similar enough to spawn duplication of labor or destructive competition. This task was most difficult for the leaders of experiments in the facility-monopolizing project [DSDP], because their teams were larger, more scattered, and composed of people with no history of working together. Co-chief scientists for these experiments did their best to work out divisions of scientific labor while the team was together in the field.

Dedicated funds for team members to discuss their analyses in post-acquisition workshops existed in only one of these projects [DSDP]. That project supported only one workshop, which most interviewees deemed inadequate for coming to terms with all the possibilities for fruitful research. In other projects, workshops were either intra-institutional [COCORP] or ad hoc [IRIS].

D. Dissemination of Results in Technique-Importing Projects
[Table of Contents]
The experiment teams have usually felt obliged to produce a collective overview of the experiment for publication in a relatively less specialized journal. The projects required experiment teams to prepare their data sets for archiving and distribution as project administrators saw fit. In one case [DSDP], the project itself issued press releases to publicize its activities, as well as publishing a volume of initial results and a volume of initial interpretations about a year later. This project considered these publications essential to enabling outsiders to work with its curated samples after the experiment team's period of privileged use expired. The other projects took all their data digitally. The project that acquired data by contract [COCORP] disseminated the data in raw and processed form, because the contractors' professionalism in "meta-data" documentation enabled outsiders to process the raw data. On the other hand, raw data collected by self-created experiment teams using instruments borrowed from a project [IRIS] were not meaningful to outsiders because of the teams' idiosyncratic ways of documenting influences on the character of the raw data.

Publishing in scientific journals and delivering papers at conferences has otherwise been left to individual initiative. While it has been common for several people within a team to work jointly on a paper on an ad hoc basis, such groups have not been large enough to generate disputes over the length or order of the author list. Interpretive disputes over the same data have not been regulated within projects or teams but become part of the published scientific record.


[Table of Contents]
Instigators of technique-aggregating projects were all motivated by frustration with their previous research in terms of its scale, site, or level of organization. To search for a suitable number of well-qualified collaborators, instigators in four of the five technique-aggregating projects we studied mobilized interest through workshops. Ideally, workshops would attract enough scientists to demonstrate general interest but not so many as to make discussions unwieldy. The participants would agree on the justification for a project, establish an upper limit to the number of participants, determine what measurements were essential for the project's existence and what measurements were desirable complements, and determine who would be responsible for eliciting proposals for the various measurements and for proposing an SMO (science management office) to see to project logistics. The funding agency and its reviewers could then decide whether the several pieces added up to a project of high quality, and if so, which of the desirable measurements to support. (In the fifth project [Parkfield], instigators faced critics concerned with concentrating too much money in a single project but circumvented this by acquiring half the desired funds from another source than their primary patron.)

In practice, our case studies include an instance in which the interest in a project was so high that workshop participants struggled to justify forming two projects instead of one [GISP2], cases in which distinguishing essential from desirable measurements totally alienated individual scientists from a project [WOCE and to a lesser degree GISP2], cases where finding scientists who could produce winning proposals for essential but unglamorous measurements was taxing if not impossible [WCR and WOCE], and cases where ceding administrative responsibilities to a participant created intellectual power that others later resented [ISCCP and to a lesser degree GISP2 and WCR]. Instigators' plans were also upset when a funding agency enlarged a project by adding experiments proposed by scientists who had not participated in the workshops [WCR], or shrank a project when an expected funding increase to the relevant program failed to materialize [WOCE]. Obviously, these projects survived their formative frictions, but their problems illuminate the difficulties in forming projects on an ad hoc basis.

From the perspective of sympathetic agency program managers, the breadth or expense of technique-aggregating proposals made for difficulties. Proposals to aggregate a wide variety of techniques attracted peer reviews that focused on the merits of individual experiments as contributions to the reviewers' and proposers' shared specialties rather than as complements to the other experiments, which reviewers felt unqualified to judge. One set of instigators only succeeded after submitting a second set of proposals, from which they had pruned the individually weakest links, and after the program manager let them make a presentation to the review panel explaining the inter-relations among the proposals [WCR][24]. Proposals for technique-aggregating projects, expensive relative to a program manager's usual budget, created pressure to develop new programmatic contexts. One set of instigators succeeded only after the agency established a new program, which the project justifiably monopolized because it consisted of the best developed set of precedent-setting proposals. Even so, the project operated with less than its full complement of measurements until the manager could build up the new program's funding [GISP2].

Instigators of projects that were both too broad and too expensive for their usual programs had to develop an interagency framework. One domestic American project [Parkfield] was predicated on instigators obtaining support from two sources of funds, and two others relied on funding from several national governments [ISCCP and WOCE]. (Even with international funds, American scientists in these projects needed support from multiple domestic agencies.) Program managers at the principal funding agencies considered endorsements from national scientific advisory bodies vital to justifying the extra efforts these projects would entail. Otherwise national advisory bodies were irrelevant to technique-aggregating projects. In some eyes, they were even to be avoided as institutions when seeking an organizational framework for a project. For advice, instigators of technique-aggregating projects sometimes set up ad hoc committees of senior scientists, to whom they or program managers could turn for help to deal with threats to the project.

[Table of Contents]
The instigators of technique-aggregating projects created SMOs, under one of the project's would-be PIs, to coordinate proposal writing and handle project logistics; often the SMO's PI was funded to hire managerial staff. The other PIs became an SWG (science working group) that the SMO strove to satisfy. The SWG often appointed an Executive Committee to advise the SMO on matters that could not keep until the full SWG met.

This style of organizing technique-aggregating projects seems ingrained in geophysicists' culture. No interviewee spoke of any one SMO as modeled on a previous one. Rather, the combination of the SMO and SWG with funding agency oversight was a reflexive response to managing a project built on the temporary common interests of independent scientists.

A. Scope of Science Working Groups in Technique-Aggregating Projects
[Table of Contents]
SWGs were direct descendants of the project-forming workshops, but with only the selected investigators and their team members invited. The SWGs discussed and set policy for the collective issues that were embedded in a project's basic design. For example, scientists sharing oceanographic research vessels had to agree on the allocation of space and the track the vessels would take; scientists examining glacial ice had to agree on how to distribute the limited quantities of cores they could retrieve; scientists building synoptic, global data bases had to agree on data-quality standards and data-processing algorithms; scientists assessing earthquake indicators had to agree on protocols for comparing their data sets.

Rarely have participating scientists agreed to an SWG that went beyond managing what was intrinsically collective in its project. Nearly everything else was left to the discretion of individual PIs. The near total absence of engineers with managerial authority in these projects [GISP2 is an exception] may explain geophysicists' preference for broad individual autonomy and narrow collective unity. In space science projects, scientists sometimes broadened their collective interests to increase their influence with the managing engineers independently employed by space flight centers.

The scope of the SWGs did vary slightly because of different levels of commitment that participating scientists brought to project work. The more project participants had been involved in project formation, the more they considered expanding the SWG's scope. Also, the more novel the opportunity scientists saw in the project, the more willing they were to put up with the stress of negotiating over collective issues.

The most anarchic of the projects we studied [Parkfield] formed in a limited window of opportunity for instigators to tap the resources of two funding agencies. For expediency, the instigators initially included in the project only PIs from their own institution, who relocated the instrumentation they were already using. Those from other institutions who later obtained funding to relocate an experiment to a project-relevant area had no say in the project's structure, and the initial investigators had little say in the new investigators' presence. When the project's senior instigators left project administration to pursue new research opportunities, their younger successors found they controlled too few resources to initiate additions to or reforms of the work already taking place. The lack of participation in project formation and the "old-wine-in-new-bottle" research strategy made SWG members highly sensitive to their autonomy. One result was that the position of project administrator provided no potential for career-building accomplishments to younger scientists.

At the other extreme were two projects that formed themselves in workshops and provided participants with novel opportunities [WCR and GISP2]. Because the workshops enabled would-be PIs to fine-tune their proposals in response to one another, the SWGs authorized the SMOs to offer a centralized data management system to facilitate data exchanges (as envisioned in the project's formation) and to organize workshops for discussion of preliminary findings. The projects inspired participants to try new measurements, expand analytic techniques, develop sampling methods, and improve project logistics. Hoping to take fullest advantage of this ferment, some PIs viewed each of these projects as the first in a possible series. (In addition to these structural factors, both these projects were blessed with personalities who could remove rancor from disputes over logistics.)

Even in these most thoroughly organized projects, however, SWGs never adjudicated or mediated scientific disputes among members. The disputants either settled the matter to their own satisfaction or proceeded independently. Furthermore, while collective pressure from the SWG was brought to bear on individual PIs whose poor productivity put the project in a bad light, the SWGs never successfully imposed collective discipline on the content of scientific work. In only one instance did a PI try to stimulate the production of a scientific paper with project-wide authorship, and his efforts died for lack of support from his peers.

Projects that provided new scientific opportunities but few chances to develop technical specialties, or provided opportunities to develop specialties but only standard scientific opportunities, achieved a middle level of organizational structure, accompanied by less-than-crippling complaints about allocation of authority. Either participants suppressed their ambition to cultivate their technical specialties and ceded more authority to the SMO for data processing, so that the project could produce data for which all participants anticipated uses [ISCCP]; or participants suppressed unbridled pursuit of their scientific interests and ceded more authority to the SMO for data acquisition, in order to align their technical specialties with the measurement of parameters strategic to a theoretical framework [WOCE].

B. Scope of Science Management Offices in Technique-Aggregating Projects
[Table of Contents]
The project-forming workshops were also occasions for would-be participants to assess who had the experience, personality, and resources to direct an SMO. Project logistics have usually absorbed SMO staff; getting people and their equipment to the site to take data was challenging in all the cases we studied. However, the immense planning that field research can require was most intellectually significant in oceanography. In the land- and space-based projects studying the solid earth or the atmosphere, interviewees took note of logistics only if it was incompetently discharged.

In our two cases of ship-based oceanography, logistics was research policy. Planning where research vessels would go, how long they would stay there, and who could dangle how much apparatus in the water determined the parameters of the data sets that the PIs could hope to create. In one oceanographic project [WCR], the power to make the first draft of cruise plans rested with the SMO, whose PI "very quickly jumped on e-mail" to insure he knew the needs of all participants. One PI thought the SMO had accounted for every minute of sampling time so that every scientist understood the upper limit. Indeed, one scientist, who initially provided a "service function," became so important to the logistics and so interested in the science that he was added to the SWG's Executive Committee.

By contrast, the less fondly remembered of our oceanographic cases [WOCE] left logistics in administrative limbo. The SMO, with input from the SWG's Executive Committee, set the basic ship tracks, but sampling schemes and divisions of labor among scientists with the same specialties were left to specialized subsets of the SWG, which were encouraged to plan without considering fiscal constraints. When those constraints proved more severe than expected, the scientists slated to oversee particular cruises had their plans upset as the SWG subsets reconsidered their sampling schemes and divisions of labor. One cruise member recalled having to work with "the primary turkey in all of oceanography" and has since insisted that they never again be on the same ship. Another's funding kept getting delayed by managerial problems to the point that the need for and opportunity to do other work led to a total defection from the project.

SMO personnel have also sometimes been responsible for creating and maintaining project-wide data bases. In almost all instances, however, a scientist who was considering publishing work based on another's data would consult the data collector in order to avoid pitfalls. The SMO data managers have not been active brokers in fostering relationships among scientists. SMO organization of post-field-work SWG meetings has been more important to the intellectual fecundity of projects. Numerous interviewees claimed to have found inspiration for joint data analyses in the workshop presentations.

C. Scope of Funding Agencies in Technique-Aggregating Projects
[Table of Contents]
Funding agencies have usually left project participants to govern themselves. Program managers at funding agencies, once the agency selected the project's PIs, have had little involvement beyond monitoring progress and raising further funds to deal with unexpected problems or opportunities. One interviewee recalled that an earlier project was terminated because participants undercut their credibility by letting a dispute over rights to each other's samples come to a program manager's attention. Whether or not such stories are common in geophysicists' folklore, interviewees sensed that they were supposed to resolve differences over a project's strategy or tactics within the project.

Funding agency program managers became activists only if they wanted to impose a longer-term perspective on a project that offered a chance for developing resources useful beyond the project itself. This situation has been a prescription for difficulties. PIs in the projects have struggled to gain input into top-down reforms, which they resented even while recognizing their virtues. When a program manager's efforts ended in technical failures that hindered the project, PIs wondered whether they could have done a better job on their community's behalf. When program managers insisted that the engineers developing improved technology report to them rather than to the SMO, participating scientists felt slighted and were prone to doubt the good will of the engineers' institutions. Program managers saw legitimacy in most of these criticisms. But when engineering development was "a community wide kind of operation," they felt obliged to grant engineers administrative autonomy from the project that stimulated the engineering work. When research scientists were torn between improving their efficiency by ceding a technical operation to a centrally managed facility and maintaining their autonomy by performing the operation themselves, program managers felt obliged to use their powers to push the researchers into what they believed to be the more efficient arrangements.

[Table of Contents]
Experiment teams in technique-aggregating projects have usually been single-institution groups consisting of a PI and the postdocs, graduate students, or technicians the institution and grants can support. They are usually bound by the PI's virtuosity in a form of measurement. Skills in analytic instrumentation, sampling instrumentation, or data processing and analysis have contributed in varying degrees to an individual's virtuosity. But the goal of each team has been to show that its form of measurement characterizes phenomena of theoretical or practical significance.

A. Origins of Experiments in Technique-Aggregating Projects
[Table of Contents]
Would-be PIs in technique-aggregating projects have routinely faced three inter-related technical difficulties (beyond the generic difficulties, which they share with strictly laboratory scientists, of improving signal-to-noise ratio, resolution, etc.). First, they strove for efficiency and user-friendliness in data acquisition in order to be less burdensome on a project's communal resources and thus more welcome to participate. Second, they strove for efficiency in data processing to increase their potential responsiveness to other PIs. And third, they strove to make their equipment operate reliably in the field.

Geophysicists in the cases we studied employed combinations of four strategies to cope with these difficulties. Some relied on data taken from instruments that others operated for other interests and concentrated on processing the data for use in geophysics. Some used standardized or communal instrumentation that was serviced by technicians who worked with all PIs funded to use the instrumentation. Those with a taste for engineering built the personal and institutional relationships they needed to design and construct equipment that improved their measuring powers. Finally, some purchased commercial analytic apparatus, which manufacturers competed to make flexible and effective, and customized the apparatus for geophysical use. All the participants in the projects we studied had strongly specialized in particular types of measurements, but those who used standardized techniques appear to have had the most opportunity to "poach" on other specialists' turf.

The strategy of relying on others' instrumentation was strictly the province of geophysicists working with satellite data. The data from NOAA's and NASA's earth-observing satellites provided a route into geophysics for scientists with data-processing skills and a desire to switch fields. One team started because a geophysics institute occasionally gambled its "venture capital" on outsiders; once the team had established its capabilities on the institute's projects, it attracted instigators of multi-institutional projects. In another project, a PI's work on satellite data gave him and his institution a new disciplinary identification at a time when the institution needed to diversify. Geophysicists in the latter 1970s appear to have split between those who thought data from extant satellites could be made useful and those who thought geophysicists needed to wait for a new generation of satellites designed with their needs in mind. The issue now seems resolved, with newer satellites promising far better service for geophysicists' needs.

The strategy of using standardized or communal instrumentation was employed for two types of measurements: first, frequently made measurements that provided a frame of reference for other measurements (such as in-situ physical measurements that are valuable to oceanographers taking samples for chemical or biological analysis) as well as intrinsically interesting information for a geophysical sub-specialty; and second, those that were too expensive for individual PIs to perform. In two of our cases, communal techniques were imposed for the first time on PIs. In one, the PIs recognized that each had partial means for creating a global data set, and that combining the data streams was best done in one institution [ISCCP]. In the other, the PIs recognized that their type of measurement could not remain part of a project unless there were a collective analytic instrument capable of taking accurate readings from small samples [WOCE]. In both cases, the imposition of centralized techniques provoked suspicions or worries. Either the PIs suspected the central institution's scientists of favoring their own methods for forming a global data set, or the PIs worried they would have too little oversight of the central facility to have confidence in its results.

PIs with a flair for engineering usually focused on improving a measuring technique as a problem of generic importance for geophysics; multi-institutional projects provided opportunities to introduce what they had already developed or to make it better. One PI recalled sporadically working for 15 years on a measuring technique that coincidentally matured around the time the project we studied formed, and it became his ticket for entry into the project [WOCE]. One project provided three PIs with opportunities to build improved versions of data-acquisition techniques that they had already specialized in. A fourth PI included a co-investigator from outside his institution on his proposal so that the co-investigator could test out and teach others about his new technique for improving sensitivity [WCR]. Another project allowed a host of instrumentation designers, who had been independently developing separate means for making similar measurements, to deploy their devices in a manner that could make possible meaningful comparisons and combinations of data [Parkfield].

PIs who used commercial analytic instrumentation limited themselves to measuring concentrations of a small number of substances but retained flexibility in handling samples taken from a variety of sources. They have purchased the instrumentation from a commercial maker with money from a project-independent grant, their institution's resources, or a combination of the two. Then, when projects tried to form, these scientists proposed to measure their favored substances in the samples the project collected. If funded, they built specialized systems for preparing the samples for the analytic instrument. This kind of experiment formation was especially vital to the creation of projects. These scientists are a pool from which project instigators have sought recruits for their cause. They have also been a source of demand for innovation in sampling methods and expanded sampling.

B. Organization of Experiment Teams in Technique-Aggregating Projects
[Table of Contents]
Experiment teams in these projects rarely had an elaborate structure. They were small enough to be in one institution. In nearly every case, the PI was just that, not the broker of arrangements among several independent investigators. The importance of particular skills varied with the team's strategy. PIs working with satellite data had team members with refined computer skills to deal with the problems of complex algorithms and large quantities of data. PIs seeking to develop better sampling instrumentation stressed the value of machine shop personnel and engineers with design skills. PIs with advanced, adaptable analytic instrumentation stressed the importance of technicians for constructing sample feed systems and automating the operation of the equipment. PIs working with standardized, centralized equipment were the most independent.

There were two types of exception to the single-institution experiment team. First, PIs building instrumentation, especially those whose instruments would diversify a sub-specialty's techniques, often had close working relations with a company that could produce and capitalize on the scientists' instrumentation ideas. The company's personnel were not usually part of the formal team, but they were of manifest importance to the existence of the experiment. Second, young scientists or scientists with an innovative experiment, seeking to improve their chances for a spot in the project, often proposed to combine their plans with the conventional measurements of better established scientists. In these cases, all the scientists submitted separate proposals and, if funded, all held the title PI.

C. Organization of Data Acquisition and Analysis in Technique-Aggregating Projects
[Table of Contents]
The strategy for data acquisition was usually a collective issue that was settled in the SWG. The lone partial exception involved a project in which some of the experimenters had such stringent deployment constraints that these constraints rather than a collective strategy defined how they acquired data [Parkfield]. Experiment teams were not so large or complex that PIs had to represent different interests.

Individual PIs processed their own data sets to obtain numbers they could use with confidence. When a particular data point seemed implausible in its statistical context, but the PI found nothing amiss in its production, the PI typically "flagged" the data point to alert outsiders to its suspicious character. Otherwise, data processing was socially uneventful, except when a PI's funding shortfall or technical problems robbed others of data they might have used. Usually the projects' SMOs had funding for a small number of post-data-acquisition workshops, where the data streams could be collectively discussed, but there was never funding for enough workshops to leave the PIs feeling they had realized the full scientific potential of the project.

Policies for intra-project data distribution have been complicated by the variety of forms of geophysical data. Some experimenters obtained raw, digital data in real time from the operation of their instruments; others have had to take raw data in the form of samples (of glacial ice, ocean water, etc.) back to their home laboratories to perform additional analytic work. Dealing with time lags in the availability of different data streams while rewarding the investment of effort in obtaining particular data has generated most of the social conflicts in technique-aggregating projects.

Experimenters have had the easiest time with each other in two projects in which everyone's data were digital at the time of acquisition [ISCCP and Parkfield]. In these cases, everyone was ready to share equally processed data at the same time—indeed, one project issued its data in both a close-to-raw and a best-processed version so that participants could try using their own processing algorithms. The lone source of tensions was what participants viewed as healthy, unavoidable intra-project competition to produce the best results.

The major difficulties for experimenters in these all-digital projects involved outside data users rather than fellow experimenters. Because participants usually concentrated on interpreting the clearest signals, outside users, to avoid competing with participants on the participants' chosen topics, might look for signals near the threshold of the instruments' capabilities [Parkfield]. That has put participants in the uncomfortable position of criticizing the reliability of claims made on the basis of their own data. When one project released data to the public in fully and partly processed forms, participants felt that outside users claimed the wrong amount of credit (or blame) for their work [ISCCP].

The control of data was most problematic in two multi-disciplinary projects [WCR and WOCE] that included PIs whose experiments took data digitally and PIs whose experiments required they collect samples for later laboratory analysis. Digital data were promptly useful to sample-collectors, but samples were not promptly useful to digital data-takers. Sample collectors could use communal instrumentation to take digital data, but communal instrumentation has not generally existed for analyzing samples. Samplers tended to assume they would have immediate access to digital data they could have taken themselves; digital data-takers assumed that all PIs would release their data after a proprietary period that would have allowed the samplers to do no more than digitize their data.

Each of the projects took a different approach to data management. In one [WCR], whose instigators had done relatively well at managing the level of interest in the project through the workshops, the participants were comfortable enough with each other to accept a data-formatting scheme for multi-disciplinary use. The collectors of the digital data placed their data streams promptly on the SMO's central computer on the assumption that sample collectors would do likewise. Not everyone analyzing samples reciprocated, but those who did thought the system superior to anything they had tried; the project, only half-jokingly, awarded one of the digital data collectors an honorary degree in a sampling-dominated discipline[25]. In the other project [WOCE], whose instigators had done relatively poorly at managing the level of interest, the project's administrators had to relegate the issue of merging data sets to a comfortably distant future. Until the data gets to the repositories, PIs are sharing data (or not) at their discretion, to the comfort of some and the frustration of others. And once the data does get to the repositories, it is uncertain whether there will be sound intellectual reasons for working with merged sets.

In almost all cases, discussions of data within the project started with assessing how geophysical parameters could be calculated from processed measurements. PIs, including those who proposed to combine measurements, rarely cared to question how other teams accounted for any instrumental or environmental effects on the data. In a few instances two PIs made similar measurements in the same project and became embroiled in methodological disputes in an effort to explain conflicting results. These disputes were settled (or not) without mediation from other project participants.

D. Dissemination of Results in Technique-Aggregating Projects
[Table of Contents]
Speaking at conferences and publishing in the scientific literature were individual acts of the PIs. Projects did occasionally organize special sessions at conferences or special issues of journals to feature scientific work done in the context of a project. Otherwise the projects almost never played a role beyond collecting and disseminating preprints to project participants and putting out a newsletter that reported on works-in-progress. There were no turf disputes over who in a project could publish on what scientific topics; if participants had not identified and reached an understanding with their potential intra-project competitors in the course of acquiring data, they did so at the formal post-acquisition workshops[26]. With papers always written on the basis of a subset of a project's data streams, the number of participants involved in writing any paper was small enough that neither the teams nor the project has needed a declared policy for the length or order of author lists.

In only one instance did a PI for an SMO attempt to stimulate a pan-project paper [GISP2]. The effort foundered for lack of enough cooperation and enthusiasm among the PIs to produce the paper in a timely fashion. PIs well on their way to addressing the proposed topic on the basis of their own data preferred to do so individually; those having trouble with their data preferred not to offer premature results.

[Table of Contents]
Geophysics projects do not form across a spectrum of possibilities but cluster around one of two ideal types. Technique-importing projects have arisen when leading scientists from competing institutions have agreed that important research opportunities were going unexploited because none of them could acquire the expensive methods that private industry had developed for acquiring new kinds of data. The scientists' institutions formed consortia to organize and manage a project. Technique-aggregating projects have arisen when scientists have felt unsatisfied with their intellectual or institutional abilities to research particular phenomena. They have created ad hoc SMOs within one of their institutions to organize and manage projects that attract practitioners of diverse techniques from many institutions. Both types of projects have to perform functions aside from the scientific work itself. These auxiliary functions include obtaining and administering funds, coordinating with the related projects of other nations, facilitating career-advancing accomplishments for participants, and keeping participants informed of each other's activities.

[Table of Contents]
The National Science Foundation figured most prominently as the funding agency in the eight geophysics projects we studied. It exclusively funded three, was the exclusive American source of funds for a fourth that shifted from national to international support, and was the principal supporter of two others that received multi-agency support from the American government [COCORP, WCR, and GISP; DSDP; IRIS and WOCE]. The United States Geological Survey was the exclusive Federal-agency supporter of one other [Parkfield], and NASA funded the eighth, with contributions from NOAA and foreign governments [ISCCP].

The prominence of the NSF in these projects does not mean that the NSF has been equally prominent in geophysics generally. Geophysical instrumentation usually originated outside the context of any particular project; especially among oceanographers, the Office of Naval Research was cited as the principal source of the "venture capital" that created the techniques that made these projects conceivable. Geophysicists who received NSF support for instrumentation usually applied for "instrumentation grants" that were administered independently of a disciplinary program office. The Keck Foundation was also cited as a source of funds for helping universities and geophysical research institutes set up new laboratories for incoming faculty. The NSF may be so prominent in this study precisely because we are focusing on multi-institutional projects, which have been so expensive and so oriented to the concerns of university scientists that usually the NSF seemed the appropriate agency.

The funding structure for these projects varied principally with whether they imported or aggregated techniques, secondarily with the political and historical accidents of their formation, and thirdly with whether they had single or multi-agency funding. All three technique-importing projects began as solely NSF-funded projects; they adopted a consortium structure to internalize inter-institutional rivalries so that the NSF would not be asked to judge competing proposals that implied a lack of consensus over the project. All three started with "block grants," which created consolidated administration of the acquisition and deployment of instrumentation.

The NSF was most prominent but not dominant in the technique-aggregating projects, solely funding two of the five we studied [WCR and GISP2]. Other agencies found technique-aggregating projects appealing because the parts dovetailed with an agency's operating needs [Parkfield], contributed to its research interests [ISCCP and WOCE], or used its facilities [ISCCP]. Other agencies did not find technique-aggregating projects prohibitively expensive, even when the agency's primary mission was not the support of research, because the projects' limited objectives insured they would not become a longstanding burden. In all cases the funding was still structured as a collection of individual grants to several PIs and usually an additional grant to one PI to staff and operate an SMO. This structure insured that the PIs in the project had a roughly equitable stake in project governance.

A. Variations in Funding of Technique-Importing Projects
[Table of Contents]
NSF program managers and project administrators varied the system of NSF block grants for technique-importing projects in response to pressures from the scientific community. The strongest pressure stemmed from jealousies associated with success—as a project acquired significant data, more scientists and their institutions became concerned with whether project governance was responsive to their desire to perform experiments.

Two of the technique-importing projects we studied were governed by small consortia and managed by one of the consortium's member institutions [DSDP and COCORP]. In both cases, their successes led to pressure to involve more institutions in project governance or in data acquisition and processing. In one [DSDP], project administrators welcomed increased participation in project governance, and the NSF's program manager welcomed multi-national funding for a project whose technique could be deployed all over the globe. As a result, consortium membership expanded to include both more American institutions and institutions of other nations that were willing to contribute money to the project. The project then required more formal rules to insure that political factions would not dictate policy. But it retained its tradition of depending on negotiations among partisans of various sub-specialties to determine the best use of the project's resources. In the other project [COCORP], the administrators resisted involving outsiders, and the NSF has cut back on the project's block grants to force project administrators to propose site-specific acquisitions of data, which the NSF can coordinate with other scientists' site interests. The result has been a change in project research strategy from the wide-ranging exploration favored by the project's administrators to more focused studies.

The third project [IRIS] always welcomed new consortium members. However, its funding structure was influenced by the initial advantage in organizing and executing experiments displayed by some of the few scientists who had gained experience by piggy-backing their research on oil-company explorations. To insure that the project's need for early successes did not become an ongoing basis for selecting experiments, the consortium members decided to give up block funding (and unified administration of acquisition, maintenance, and deployment of instrumentation) in favor of having the NSF handle proposals for experiments.

B. Variations in Funding of Technique-Aggregating Projects
[Table of Contents]
In all technique-aggregating projects, the SMO mediated between PIs and the funding agency. But because of the different traditions of the several funding agencies that provided support, the scope of the SMO and its authority within its jurisdiction have varied significantly. Even the projects supported solely by the NSF varied because of their contrasting historical origins.

Of the two projects totally funded by the NSF, the SMO and Executive Committee of one [WCR] were a more active filter between participants and the NSF than those of the other [GISP2]. The members of this more powerful SMO and SWG Executive Committee recruited scientists to propose interlocking experiments for the new project; when this package of proposals failed, they weeded out the scientists whose proposals had been reviewed poorly in order to submit a stronger set. When the NSF accepted the second package, however, it funded the proposals individually through the home institutions of the PIs, not through the SMO. In the project with the less powerful SMO, the SMO proposal was drafted and distributed so as not to favor any particular experiment or proposer. Individual proposers made their own decisions on the risks and benefits of dovetailing proposals, and the NSF and its reviewers picked from the proposed experiments and PIs.

The three projects that were funded partly or not at all by the NSF showed even greater variation in the scope and authority of the SMO. In one [Parkfield], the primary funding agency, after allowing the SMO to negotiate the selection of instrumentation for the project with the secondary funding agency, left the SMO with so little science managing to do that the more established project instigators left project operations to pursue new research. Their less senior successors were unable to expand the significance of science administration. In another project [ISCCP], the primary funding agency bestowed near dictatorial powers on the SMO. Instigators were generally relieved because they saw no alternative method or institution to manage the project, they viewed the despot as benevolent, and the territory the SMO commanded did not create an institutional advantage for its scientists. In the third project [WOCE], the SMO was an active filter between proposals and the NSF, but the interest of other agencies in the project allowed some of what was filtered out to get back in, because final funding decisions were made at meetings of an inter-agency group of program managers.

[Table of Contents]
Among our eight case studies, two projects were internationally organized from their inception, one shifted to an international organization in the course of its operations, one had international relations that were of decided importance to the course of the project, two others were groping to form productive international relations, and one had incidental participation from foreign scientists. ("Foreign" and "domestic" are defined from the perspective of a United States citizen.) Only one of the eight was both organized and operated entirely with domestic scientists [WCR].

Several forces have encouraged internationalism. The combination of scientific and moral imperative with political convenience has been most potent for stimulating scientists to put the study of global processes (or processes that occur in many regions) on an international footing. That is, the projects made better sense to their organizers in an international context, and were most easily pursued when several nations' scientists secured cooperation from their governments. Other factors favoring internationalism were the desire to spread project costs across governments and the desire to broaden the expertise available to a project.

Technique-aggregating projects that investigated global processes most baldly displayed the forces promoting internationalism. Politics, finances, and a desire for the broadest possible base of expertise combined to make global, technique-aggregating projects international in origin, participation, and management. These forces were least present in site-specific technique-aggregating projects; those originated domestically, were managed domestically throughout their existence, and had incidental foreign participation, if any, when American scientists had ready access to the site. Technique-importing projects all originated domestically and became more or less thoroughly internationalized depending on how attractive internationalization seemed when balanced against the desire to keep management in familiar hands.

A. Arrangements for Internationalism in Technique-Importing Projects
[Table of Contents]
All the technique-importing projects we studied began as consortia of American institutions. All three have sought to internationalize, but with varying degrees of formality and enthusiasm. The major advantages of internationalization—access to more and better research sites and the spreading of project costs across governments—were offset by the realization that internationalization required broader sharing of managerial authority and broader sharing of the privilege to participate in data acquisition.

Two factors together determined where the balance in the conflicting impulses for internationalization fell: the means by which a consortium acquired and managed its instrumentation and the labor-intensiveness of analyzing the data. The more expensive and rare the equipment for acquiring data, the greater the incentive for the United States to share costs, the greater the value to other nations of joining in on what the Americans had started, and the easier to negotiate terms that preserved what the Americans considered the proper project management. The less expensive and rare the equipment, the greater the opportunity for several nations to launch projects and the more difficult to bring them under a powerful common framework. The more labor-intensive the post-acquisition analysis of the data, the harder for American scientists to keep pace with the rate of data collection, and the easier for them to share the privilege of collecting data with scientists from other nations.

The consortium that monopolized the instrumentation available for scientific purposes [DSDP] also collected samples that were labor-intensive to analyze. With both the expense of the project and the accumulation of samples favoring internationalization, the project turned thoroughly international. By paying dues, other nations could join the consortium and receive a seat on the executive and standing committees—provided that their representatives, in the eyes of the extant consortium members, represented their nations' scientific over industrial interests. The new members insisted the consortium add formal rules to ensure that they would not be rendered powerless by bloc voting of American institutions. But the general consensus of interviewees was that internationalization did not change the project's culture, and one interviewee recalled a specific case in which the project held to its integrity over the desire of a member nation to perform research that looked more significant for industrial than for scientific purposes.

Internationalization in this project did burden its chief administrator with quotas for the participation of the several nations' scientists in data acquisition. This requirement occasionally created undesirable circumstances—scientists who initiated the consideration of a target have had to forego leading the data-acquisition team so that another nationality could have its share of team leaders; team leaders have worked with scientists they did not like; or a team leader has run out of space for valued colleagues because the chief administrator would not allow any more scientists of that nationality onto the scientific party. However, the consensus of interviewees was that such situations were annoying, not seriously debilitating, and that the value of making international contacts and tapping the broadest possible ranges of expertise far outweighed internationalism's problems.

The consortium that purchased instruments designed and built to its specifications [IRIS] did not have a monopoly on such projects in the world, and early in the consortium's existence, the members painfully reached a decision to use the consortium's data-management policies to "level the playing field" among themselves. Internationalism did not take root in the consortium's governing structure—even though it was a foregone conclusion that no nation could possibly obtain enough instruments or permissions to use sites to satisfy scientists' desire for global data—because the consortium was in no position to reopen hardware design considerations and data-management possibilities in order to integrate with other nations' projects. Instead, an American instigator has stimulated efforts to work out a post-hoc data-sharing regime. ICSU has granted the formal status of "federation" to the initiative, but no budget or authority has come with that designation.

The consortium that contracted out for data acquisition to firms doing generically similar work for oil companies [COCORP] has made both raw and processed data available to outsiders after its members published their interpretation of the data. The consortium's success has inspired several other nations to initiate similar projects. But with the consortium's scientists able to keep up with the processing of the data, and with other nations' scientists able to develop expertise and apply the technique to targets of their choosing, the only formal international activity has been a bi-annual conference. All international collaboration in research has been ad hoc, though the American consortium has begun to pursue international arrangements to escape changes its administrators dislike in domestic funding policy as well as to reach intriguing targets that would otherwise be inaccessible.

B. Arrangements for Internationalism in Technique-Aggregating Projects
[Table of Contents]
The ICSU, which comprises disciplinary unions and the national academies of member nations, and the WMO, which is an agency of the United Nations, provide means for organizing international symposia on global geophysical processes. For two of our projects, such symposia have served the function of workshops in which technique-aggregating projects form [ISCCP and WOCE]. Scientists in these meetings have convinced themselves of the significance of some global phenomenon and of the feasibility of measuring it if enough nations' facilities or scientists were coordinated for the task. They have petitioned ICSU and WMO to create program offices with the authority to appoint steering committees from leaders of the science discussions. These steering committees, in effect, became the proto-executive committees of proto-SWGs. The program manager and committee together searched for a scientist to be "seconded" to WMO for the purpose of heading an international SMO, which may be physically located at WMO or at an institution willing to provide space.

The major weaknesses of this system are the obvious ones: WMO and ICSU can support meetings and project administration, but they have neither the money to support data acquisition nor the power to compel national governments to cooperate in the interest of the research. The scientists in the proto-working group have had to petition their national governments for the funds or equipment the projects needed. In one project we studied [ISCCP], money was not a daunting factor because governments were already separately collecting the relevant data for the project. However, cooperation between the scientists and the officials in charge of collecting data was occasionally difficult and in one case impossible to secure. Money was a problem for a project [WOCE] whose plans required that some nations' scientific communities either change their priorities or obtain more funds for their discipline. When the majority of governments wanted certain measurements concentrated in a particular region, the project lost most of the scientists and funding that was supposed to come from a major nation in a different region. When the scientists of another major nation decided not to fund those measurements in order to concentrate on others jeopardized by the first major nation's departure, the entire component of geographically concentrated measurements was set back. When a major nation did not, as its scientists had expected, support the project by increasing the relevant budget, scientists who considered the project a secondary priority threatened to pursue their other interests rather than sacrifice them to the project.

The WMO is a United Nations agency, but scientists have managed to keep the projects we studied from becoming entangled in ideological battles that have marred the operations of some United Nations agencies. ICSU memberships also have been contested on ideological grounds, but the projects we studied were not affected. Scientists seem more worried that poorer nations will prevent the taking of data in their territorial regions as part of the North-South politics over the legacy of imperialism and the responsibility for ameliorating global environmental decay.

Scientists in technique-aggregating projects that focused on a site rather than a global phenomenon were either indifferent to internationalism or found internationalism a contentious burden. When the site was in the United States or international waters close to its borders, American scientists were neither averse to foreign colleagues nor solicitous of them. In cases where foreign colleagues possessed valuable expertise, had adequate support from their own governments, and could tolerate isolation from verbal exchanges among American participants, they have been welcomed. When the site was outside the United States and not in a major scientific nation [GISP2], it attracted an international cast of competitive, would-be project instigators, who brought too many personal animosities and too many junior scientists in their wake. Negotiations over how to provide efficient logistics without creating unwanted centralized administration drove one administrator to threaten to resign his position, forced bitter rivals to unite while sundering longstanding collaborators, created an alliance of convenience between Americans in different institutional circumstances, and left lingering doubts over whether an efficient division of scientific labor had been sacrificed to political expediency.

[Table of Contents]
The geophysics projects we studied were based on the individual pursuit of three types of careers. First, obviously, geophysics projects needed research scientists who were accomplished in measuring techniques and who sought the research opportunities that only a multi-institutional project could support. Second, projects used geophysicists (and occasionally scientists from other disciplines) in administrative positions to organize and manage the projects. Third, projects needed engineers or industrial scientists who were willing to work in academic or government settings rather than in a for-profit business. The latter two have not been extolled as career objectives in American culture, and geophysicists have not created institutions comparable to the NASA flight centers, which actively draw scientists and engineers into project administration. However, the conditions of careers have pushed enough people with enough talent into projects that in only two of our cases did any interviewees feel that administrative failings had compromised scientific achievement.

Of the 61 American geophysicists we interviewed because of their participation as research geophysicists in our projects (as opposed to those we interviewed because of the administrative posts they held), roughly half either held non-teaching university appointments or performed their research at geophysical research institutes that were affiliated with a university, although not part of a university department. One-third had built careers in university teaching, and one-sixth worked for government agencies. While we did not collect information on the source of everyone's salary, oceanographers are known to depend heavily on "soft money,"[27] and the presence of seismologists and atmospheric scientists among the interviewees at geophysical research institutes or in non-teaching university appointments suggests that other geophysical specialists often rely on grants for part of their salaries. The pressure to raise money probably accounts for the tendency of geophysicists to specialize in a particular kind of measurement[28]. Only by specializing can a geophysicist constantly maintain the credibility to compete for the use of a technique-importing project's instrumentation or for a slot in a technique-aggregating project. For geophysicists who staked their careers on developing a new or improved measurement, multi-institutional, technique-aggregating projects have been important symbolically as well as intellectually; inclusion in such projects has been a sign that they and their technique had "arrived" as part of the panoply of accepted geophysical measurements.

Long-term projects have been a mixed social blessing for geophysicists. For research scientists contributing to technique-aggregating projects, especially those not carrying project administration burdens or not responsible for acquiring data with a regularity appropriate for students writing dissertations, long projects have provided welcome relief from the vagaries of fund raising. Others found long projects problematic. They either avoided technical work for the project, or made sure to supervise student research on other topics so that the time spent on the project would not create a career-stultifying hiatus in publication, or gambled by concentrating on using the project to develop a technique rather than working up older data for publication. Those with interests in graduate education worried that students working in long projects did not experience the full complement of activities needed to produce a scientific result; one rebelled outright when he felt the project had not created a decent niche for graduate students.

Two of our three technique-importing projects [DSDP and COCORP] have provided a steady source of low-level jobs for research scientists, who oversee the acquisition of data at the selected research targets. (The third [IRIS] has created lower level jobs in engineering but not science, because the consortium decided to restrict project personnel to servicing outside researchers.) For one project [DSDP], we lack information on whether these jobs were good science-career launchers; the lone such scientist we interviewed chose to quit science for family reasons. In the other project, these jobs were viewed as career-launchers. The project offered limited opportunities for upward mobility, and deliberately required that its lower level research scientists build industrially relevant skills in signal processing.

Research scientists outside the groups that jointly managed a project's instrumentation benefitted from technique-importing projects in proportion to the degree of control they could exercise over the technique. The project that simply loaned instruments to outside users [IRIS] was most widely praised. It has enabled graduate students to be involved in more aspects of a large experiment than had previously been possible, and enabled junior scientists at universities to do experiments for which they previously could not have acquired instrumentation at all. In another case [DSDP], which put together science parties to research selected targets, researchers universally enjoyed practicing their several sub-specialties in each other's company. However, some came away from the gathering of data without a research agenda that could attract support on its own. Least socially satisfying was the project that contracted for data acquisition through one consortium institution [COCORP]. Outside research scientists who suggested a target and helped with field arrangements were not formally part of the team and were invited only to discuss the interpretation of results. They found much of interest in the data, but neither their universities nor their students benefitted directly.

Scientists have traditionally moved in and out of administrative positions that can influence the development of science. But our sample of cases stresses the importance of experienced research administrators who left research permanently for administration[29]. All the technique-importing projects originated in funding-agency programs that were supervised by a career program manager, as did six of the eight technique-aggregating projects. Two of the technique-aggregating projects had, in the eyes of some interviewees, scientifically limiting administrative problems. One of these sought funding from an agency program whose manager did not make a career of public research administration [WOCE], and the project suffered from having been planned without an accurate reading of the program's likely budget. The other [Parkfield] did have a career program manager, but his agency's experience was overwhelmingly with in-house projects, and nobody ended up in a position to coordinate the in-house and external experiments. (The scientists in the second project without a career program manager [WCR] did succeed at governing themselves.) Though any solid conclusion would require at least a comparison of successful with failed projects, it seems no accident that our case studies were largely shepherded through by career program managers.

Engineers and industrial experts, who linked academic geophysicists to the manufacturers or corporate services they required, came from four sources. Probably most crucial were trained engineers who were veterans of failed start-up ventures and were happy to forgo the risks and benefits of private entrepreneurship to work for a salary in a geophysics research consortium or institution. Four of our projects consciously called for technical innovations; such engineers were prominent in two of these projects, in the third a research scientist with prior industrial experience filled this role, and in the fourth the effort at innovation failed. Another project relied on an industrial scientist who retired from corporate employment to a second career managing the project [COCORP]. And several projects benefitted from scientists who chose to start businesses to serve scientists' instrumentation needs. Only one of our projects made do without hiring an engineer or tapping industrial support; this was the project that relied on processing data that were already being collected for other purposes [ISCCP].

[Table of Contents]
All the geophysics projects we studied established communication hubs to collect information and pass it out to participating scientists at multiple institutions. However, in some projects the hub existed for the sake of the spokes, while in others the opposite was true. In the former case, scientists at the spokes were prone to limit concentration of power at the hub and had to be convinced to contribute more information to the hub than seemed initially necessary. In the latter case, scientists close to the hub had to realize the importance of providing enough information to enable scientists at spoke institutions to satisfy their research interests. The principal factor that has determined the relative flows of information from spokes to hub or hub to spokes has been the expense and technical complexity of the project's communal resources, which the hubs have controlled. Projects with esoteric communal resources had information-absorbing hubs; those with mundane communal resources had information-transferring hubs.

Projects with information-absorbing hubs were usually the product of a consensus among senior, prestigious scientists over the value of modifying an industrial field work tool for fully open research. Informal communication was a sufficient foundation to obtain funds for trial use of industrial equipment and then to proceed to a full-scale project proposal. The proposal itself would come from a senior scientist's institution or a new institution incorporated for the purpose of operating the project. All contributions to the proposal from scientists at other institutions would be incorporated at the proposing institution, not at the funding agency. When funded, such projects usually received "block grants," which enabled the project to consolidate administration of the equipment; the funding agency itself usually never became a communication hub. The project's administrators hired any engineers needed to help design equipment, contracted out for any engineering services that did not require specially designed equipment, and arranged for specific deployments of the equipment. Outside scientists contributed useful advice, especially with regard to where to deploy the equipment, through committees the project organized for that purpose. But outside scientists did not have responsibilities that required them to become large consumers of project information.

If administrators had wanted information-absorbing hubs to become monoliths (which they usually did not), they would have been prevented by political pressures to fund diverse projects from many institutions. Administrators have still protected the hub's autonomy by providing either finished instrumentation, fully processed data, or nearly raw data plus the needed documentation for processing. Thus the hub remained the repository of information on the development of instruments and the methods insider scientists used to process the data. External scientists could use the instruments, reprocess raw data by their own methods, or integrate processed data into their own analyses. Changing a project required external scientists to communicate their ideas to a project's advisory committees, which could press the change on project administrators or, if the administrators resisted, seek to involve the funding agency in project policy.

Projects with information-transferring hubs were usually the product of a more eclectic group of more junior scientists with a common interest in displaying their prowess under particular conditions. They often found each other at workshops, which a core set of scientists had organized in hopes of mobilizing interest. The would-be participants created hubs to collect and distribute their logistical needs and ideas and to produce a first draft of plans. Rather than submitting an integrated proposal through one institution, project advocates submitted a package of individual research proposals, with one institution submitting an additional proposal to provide for the project's ongoing logistical needs. PIs autonomously provided their own equipment and the technical support it required; the hub and other spokes became involved only if technical needs conflicted or project resources were too meager to accommodate all plans. Experienced administrators at the hub institution were important for spotting possible problems, wringing the most from project resources, and leading discussions of contentious issues, but they did not become thoroughly versed in the intricacies of the spoke scientists' methodologies and thus never served as conduits to the spoke scientists of each other's methodologies.

After data were acquired, pressures to produce the best possible science usually impelled spoke scientists to consider allowing the hub to collect the several data sets they were producing and make these data available within the project. However, lateral communication and data-sharing among spoke scientists with complementary scientific interests was also common. Often scientists would exchange or provide the hub with data that were only partially processed, which not only made for quicker discussions of their mutual implications, but also assured that scientists interested in each other's data would consult each other about the data sets' reliability and further possibilities. But in no case we studied were hub scientists successful at organizing project-wide authorship of a scientific paper. Publications involving scientists from multiple spokes were always the product of lateral communication.

(By Joan Warnow-Blewett and Anthony J. Capitos)


[Table of Contents]
This report is based on a number of sources: (1) an assessment of archival matters addressed in 101 interviews on the eight selected cases for geophysics and oceanography; (2) the patterns uncovered through the historical-sociological analysis of these interviews; (3) information from 57 questionnaires returned by our interview subjects concerning their record-keeping practices; (4) discussions with archivists at home institutions of interviewees; (5) 11 questionnaires from archivists and records managers at these home institutions; (6) seven site visits to discuss record-keeping with administrators and records officers (especially at funding agencies) involved with geophysics and oceanography projects; (7) discussions with National Archives and Records Administration appraisal archivists for four Federal agencies; and (8) the AIP Center's general knowledge of archival institutions in various settings.

Because of the AIP Center's relative lack of familiarity with funding agencies and research laboratories active in geophysics and oceanography, and because of the complexities and varieties of collaborative structures in these disciplines, the project's methodology emphasized site visits. Thanks to the historical-sociological analysis of project interviews, we were able to bring to our site visits a picture of the institutional structures and functions that had the greatest impact on the projects. The findings from the project's analysis and site visits, filtered through our previous knowledge of archival institutions, provide the single most reliable guide to identifying areas of documentation problems and potential solutions.

A. General Observations

[Table of Contents]
All of the geophysics and oceanographic projects studied by the AIP were funded by Federal funding agencies and subject to the reporting requirements of those agencies. Federal agencies are required to retain successful proposal files, including the proposals and budget requests, peer and panel reviews, and progress and final narrative and fiscal reports. Because of these requirements, a bare-bones minimum of documentation for these projects, far less than is desirable, resides at the Federal funding agencies (and is supposed to be transferred later to the National Archives).

Otherwise, the responses to our archival questionnaires indicate that record-keeping practices of multi-institutional collaborations in geophysics are largely dependent upon the personal inclinations of the participants.

B. Data on Categories of Records
[Table of Contents]
We begin with data the project gathered on the eight selected cases in geophysics.

  1. Collaboration-Wide Mailings
  2. Electronic Mail
  3. Scientific Electronic Data

C. Circumstances Affecting Records Creation
[Table of Contents]

D. Location of Records
[Table of Contents]
Our investigations located the categories of records that, taken as a whole, document multi-institutional collaborative research; the number of record categories varied, but it was always less than a score. For any one project these core records are located in several settings. The main locations of records are at policy-making bodies (e.g., the National Academy of Sciences in the United States and—at the international level—the International Council of Scientific Unions (ICSU) and the World Meteorological Organization (WMO)), at national funding agencies, in the hands of administrators and selected staff at SMOs or consortium headquarters, and in the files of principal investigators (PIs) of projects. Some of these locations lack formal record-keeping procedures; some lack obvious repositories.

[Table of Contents]
Institutional archival policies are the key to the preservation of documentation. Our study focused on those individuals who participated in or administered our selected cases. For the eight geophysics and oceanographic projects included in the study, 61% of our geophysics interview subjects came from academia (53% from faculties and 8% from SMOs and consortium headquarters offices attached to academic institutions), while only 31% came from government science agencies or their contract laboratories. The final 8% of our subjects came from industry (2%), were employed at freestanding consortium management offices (4%), or were employed by the ICSU or the WMO (2%). Although this 8% comprises the smallest group, the records of these individuals may be the most valuable documentation concerning the design and planning of projects.

We first describe the current situation of Federal records schedules including the process of their approval by the National Archives and Records Administration. We next address record-keeping practices at the main locations for our cases in geophysics and oceanography: Federal agencies and their laboratories, academia, SMOs and consortia attached to academia, and new, freestanding institutes created for projects. We close this section with comments on record-keeping practices of other policy-making bodies: the National Academy of Sciences, the ICSU and the WMO.

A. National Archives and Records Administration
[Table of Contents]
The National Archives has determined that its General Records Schedules should not govern the disposition of research and development records. See Report No. 2, Part A: Space Science, Section 3.IV.A.1. for details.

B. Federal Agencies and Their Laboratories
[Table of Contents]
Our overall findings show that, in general, Federal agencies and their laboratories do not document their research and development activities well. While these agencies vary widely in the completeness and application of their records schedules, all of the agencies we examined need to revise their schedules in order to capture their research and development activities. These revisions could be carried out with the aid of the appraisal archivists at the National Archives.

1. National Science Foundation
The National Science Foundation (NSF) is currently the major U.S. funding agency for projects in geophysics and oceanography. All materials concerning a particular NSF-funded program are placed in a single file called a grant jacket. Records management at the NSF concerning science project records consists mainly of the retention of these jackets, which include proposals, reviews, correspondence, budget materials, and progress reports. In addition, the NSF has administrative records, including the records of both the NSF Director and the Deputy Director, the records of the National Science Board with its Executive Committee and standing committees, and those of the Office of Legislative and Public Affairs.

The current NSF records schedule is being updated by the NSF's records officer. This schedule seems very complete in dealing with the grant jackets that document the NSF's involvement with scientific projects. These proposal case files are scheduled to be transferred to a Federal Records Center two years after the project is closed out. A project is formally closed out when its final report is received; because of this requirement, many projects are believed to remain open simply for lack of a final report. Five years after a project is closed, accepted proposals are sent from the Federal Records Center to the National Archives, while files concerning proposals withdrawn or declined are discarded. Along with these NSF project records, the administrative and planning records of the agency are also scheduled as permanent. The record-keeping practices at NSF headquarters appear to follow the records schedules and provide adequate documentation of the planning and funding of NSF scientific projects. It is important to note, however, that the schedules cover only records at NSF headquarters; NSF contract facilities do not produce Federal records.

Records of NSF contract laboratories are not usually well cared for. (The two exceptions we know of are the National Center for Atmospheric Research and the Scripps Institution of Oceanography, where independent archival programs are in place.) Since NSF-funded laboratories do not create Federal records, they are not scheduled by any Federal agency. These laboratories are required only to provide NSF deliverables, including budget and progress reports. As a result, many valuable records will be lost if these laboratories close, unless their host institutions or the National Archives take responsibility for them.

2. National Oceanic and Atmospheric Administration
The records retention schedules for the National Oceanic and Atmospheric Administration (NOAA) are currently being rewritten by the National Archives appraisal archivist assigned to this agency. The new schedules will include a section concerning the records of research and development that will parallel the schedules already developed for the National Institute of Standards and Technology (NIST).

The schedule for NIST research and development records could indeed serve as a model for other scientific agencies. Under its provisions, project case files include materials such as correspondence, memoranda, e-mail printouts, progress reports, and working papers. Before a project is closed, the appropriate division chief uses a list of criteria to determine whether the project is to be considered significant; the criteria include, among others, the awarding of a national or international prize, the involvement of a prominent NIST investigator, widespread media attention, and congressional scrutiny. Records for projects meeting one or more of the criteria are to be retained permanently. This system of selecting significant project case files also covers research notebooks, project proposals, budget information, and planning files, which are to be transferred to the appropriate project case file.

Although the new NOAA schedule, when finished, will be a great improvement over the current one, it is the application of the new schedule that will determine whether the records management program actually improves. As of now, some say records management has "broken down" because a traditional hierarchical records management system has been applied to an agency with a decentralized administrative structure.

As records are created, they are sent up through various offices before they emerge from NOAA as official documents. After signature by an administrator, the record copy is passed back down to the creating division or unit. Administrative offices retain only reference copies, if they keep copies at all. This system creates two major records retention problems: difficulty in identifying the record copy, and the need for a strong records management program throughout the lower divisions of the agency. Unfortunately, the NOAA records management program does not appear to be strong enough.

The records retention schedules now being revised by a National Archives appraisal archivist will better fit the way NOAA is structured. This schedule will include provisions for significant project records with guidelines to help with their identification.

3. United States Geological Survey
We know of no plans to revise the current records retention schedules for the United States Geological Survey (USGS), but revisions are definitely in order. For example, the USGS schedules state that none of its project case files are considered permanent; they are to be destroyed after thirty years. These records include contracts, technical records, drawings and photographs, progress reports, correspondence, and planning documents. This disposition is to be followed unless superseded by an individual division schedule. Project proposals and laboratory project notebooks meet the same fate as the project case files with which they are associated.

In practice the divisions do specify some project records to be retained, notably seismic and other geophysical data. USGS has been fairly consistent in retaining the geological data from its projects because of their long-term scientific and practical usefulness. These records remain in USGS custody for seventy-five years before they are transferred to the National Archives. Some at the National Archives feel that the scientists have no interest in preserving their records beyond the published materials and scientific data; this is consistent with AIP's experience.

The USGS records management program appears to be severely understaffed. One of its main problems is its lack of identity within the institution: most of the USGS scientists we interviewed did not know that a records management program existed or what happened to their records. Two examples of this problem: one division chief at headquarters expressed certainty that there was no records program at USGS, and one USGS branch has not transferred any records to its Federal Records Center in more than a decade. An education program to help scientists identify important records, and to help them understand the records management program, would be a beneficial first step in securing the important scientific documentation of this agency.

C. Academic Archives
[Table of Contents]
Sixty-one percent of our interview subjects were from academia. Even setting aside those based at SMOs and consortium headquarters offices attached to academic institutions, a substantial number (more than half) remain on academic faculties. As we have found for other disciplines, academic archives have a long-standing tradition of documenting the full careers of outstanding faculty (this is particularly true in English-speaking countries). The professional papers of geophysicists and oceanographers with distinguished careers would qualify for acceptance by most academic repositories under well-established procedures. We cannot afford to be too optimistic, however; academic archives programs are currently suffering from reduced resources.

D. Science Management Offices and Consortium Offices Attached to Academia
[Table of Contents]
Other locations for the administration of our geophysics projects are the SMOs or consortium headquarters offices created for a particular project. Some of these offices were situated at academic institutions; other projects were managed by new, freestanding institutions (see below). Details on the science management offices and consortium headquarters are to be found in Report No. 2, Part B: Geophysics and Oceanography, Section 2: "Historical-Sociological Report."

With one exception, the project administrative offices at academic institutions were located within a department at the home institution of one of the PIs. Under the direction of that PI, the office is responsible for project administration and for coordinating any workshops. The lifetime of these offices is usually, but not always, tied to the lifetime of the project and may be quite short. The records of such an office would include information concerning the development of the project, the logistics behind it, and (in virtually all cases) the records of any committees or working groups set up by the project. In our exceptional case, the Scripps Institution of Oceanography (affiliated with the University of California at San Diego), a professional archival program has been in operation since 1981. In our other cases, these records will be lost unless arrangements are made with the academic archives for their permanent retention.

E. New, Freestanding Institutes
[Table of Contents]
In two of our geophysics cases new, freestanding institutes were created for the sole purpose of administering the project. These freestanding institutes are described in Report No. 2, Part B: Geophysics and Oceanography, Section 2: "Historical-Sociological Report."

One such institution we examined is the Joint Oceanographic Institutions, Incorporated (JOI). JOI is a consortium of ten American oceanographic institutions and is the prime contractor for the Ocean Drilling Program (ODP). JOI works closely with the Joint Oceanographic Institutions for Deep Earth Sampling (JOIDES), an international consortium which advises the Ocean Drilling Program on both scientific and logistical endeavors. JOI receives money from the NSF and passes it to the Ocean Drilling Program, located at the Texas A&M University Research Foundation, to operate the program. Using JOI as prime contractor, the NSF is able to have ODP run without the legal restrictions imposed on a government agency.

JOI is thus not part of an institution that could retain its records at the conclusion of the Ocean Drilling Program. The problem is even more serious for JOIDES, where most of the scientific and policy decisions are made. Not only is there no existing repository for the records of JOIDES, but its central office rotates every two years to a different member institution. According to a former director, JOIDES meticulously saves and passes on its records, which include every proposal ever submitted, minutes of all panel meetings, and audio tapes of the Planning Committee meetings. Some of these records, particularly workshop reports, are sent to JOI for storage; the rest are passed along to the next host of JOIDES. Because of the transient nature of the JOIDES office, an archival repository should be established, or arrangements made with an existing archives, for the permanent retention of these highly valuable science-policy records.

The other new, freestanding institution our project dealt with is the Incorporated Research Institutions for Seismology (IRIS). Like JOI, IRIS is a corporation with its own headquarters and no affiliation with any academic institution. The three main programs of IRIS are the Global Seismic Network, the Program for Array Studies of the Continental Lithosphere, and the Data Management Center. Although IRIS is involved with individual experiments, its purpose is to facilitate science, not to conduct it; any proposals for experiments using its instruments must be submitted to the NSF for funding and approval. Other than the provision of deliverables to the NSF and the archiving of scientific data, the collaboration imposes no mandatory record-keeping requirements on projects supported by IRIS instruments. We know of no plans to retain records nor of a repository to place them in. According to our interview subjects, the most important records for documenting IRIS's planning and operations are those of the committees overseeing the three IRIS programs; these committee minutes are provided by the secretaries to the head administrator of IRIS.

F. Subcontractors
[Table of Contents]
Our general findings regarding documentation of the work of subcontractors in industry apply to geophysics and oceanography. For example, "off-the-shelf" purchases are well documented in product literature, and fulfillments of formal contracts are documented in the legal contracts themselves. On the other hand, innovative engineering in industry rarely results in a publication, and much of the give-and-take between collaboration and company personnel produces no documents. For more details see Report No. 2, Part A: Space Science, Section 3: "Archival Findings and Analysis."

Another problem is that a great many of the subcontracts went to small companies that may go out of business before long. In oceanography, the example of the drilling vessel Glomar Challenger, built for the Deep Sea Drilling Project, illustrates how insecure the documentary record of work done by industry may be. The vessel was built by Global Marine Corporation, which, since winning the contract, has undergone at least two reorganizations (one of them under Chapter 11 protection) and has relocated.

In many of the geophysics and oceanography collaborations studied, the SMO's PI was the primary intermediary between the collaboration scientists and the contractors, and the administrator's records provide the best documentation of industry involvement. In some cases almost all contracts and correspondence with contractors were kept in a single office, as with the Polar Ice Coring Office for the Greenland Ice Sheet Program. In other cases, as with IRIS, the program office exercised careful oversight of the contractors' work, which increased the survivability of some subcontracting records; IRIS also issued a newsletter that contained information about subcontracts.

G. Other Policy-Making Bodies: National Academy of Sciences, International Council of Scientific Unions, and the World Meteorological Organization
[Table of Contents]
The most important records at the National Academy of Sciences for documenting projects in geophysics and oceanography pertain to three of its boards: the Board on Atmospheric Science, the Ocean Studies Board, and the Polar Research Board. The records of these boards are kept as part of the Academy's Archives.

The ICSU is located in Paris. Prompted by a lack of office space, it recently employed a consulting archivist to prepare an inventory of its records and develop an archival program. The records of ICSU consist of headquarters materials, including Executive Committee records, general assembly records, general committee records, yearbooks, and publications. Along with its small archives, ICSU has a document center that holds its own published materials as well as publications of its disciplinary unions and committees. These holdings, however, do not include the records of the disciplinary unions (for example, the International Union of Geodesy and Geophysics) and their committees, where the policy making is done; those records are passed along as the unions' Secretariats move from place to place.

The WMO also has a small archives program. During our site visit, we were primarily concerned with the records of the World Climate Research Programme (WCRP), the management office for two of our cases. The WCRP is located at the WMO in Geneva, but is an offspring of the WMO and the ICSU. The office of WCRP's director is full of scientific and administrative correspondence, unpublished minutes of meetings, and so forth. The archivist at WMO would welcome accessioning the WCRP records (and ICSU agrees this would be appropriate), but the WCRP staff are not willing to part with what they feel are records of an independent unit.

(By Joan Warnow-Blewett and Anthony J. Capitos)

For the AIP project's recommendations on general steps that institutions should take to improve their documentation of large collaborations, see Report No. 1, Part B: "Project Recommendations."

[Table of Contents]
The purpose of these guidelines is to assist archivists and others with responsibilities for selecting records for long-term preservation. The guidelines are based on two years of field work by the project staff of the American Institute of Physics (AIP) Study of Multi-Institutional Collaborations in Space Science and Geophysics; they were reviewed by the study's Working Group, composed of eminent scientists and science administrators in these disciplines as well as archivists and historians and sociologists of science.

These guidelines cover records created by multi-institutional groups that set national and international policy and by groups that participate in collaborative research projects in geophysics and oceanography. A multi-institutional research collaboration consists of groups of scientists from a number of institutions together with a consortium headquarters or Science Management Office (SMO) where the project is managed. The collaboration joins together for a period of years to design and construct apparatus and software, collect and analyze data, and publish results. Close to 100 interviews with members of eight collaborations were conducted using structured question sets covering all phases of the collaborative process and the records created by these activities; science administrators were also interviewed.

The reader should bear in mind that most geophysics research has been and continues to be carried out without multi-institutional cooperation. Because there have been relatively few large, multi-institutional collaborations during our period of study (from the early 1970s to the near present), we have not felt the need to offer guidelines for identifying the most significant collaborations, but rather to recommend that documentation be preserved for all large, multi-institutional collaborations in geophysics and oceanography.

For more details on the role and evolution of the appraisal process, see Report No. 2, Part A: Space Science, Section 4: "Appraisal Guidelines."

[Table of Contents]
Records are created in the process of carrying out activities or functions. The most effective approach to appraisal of records is "functional analysis," in which important functions are identified and then the best documentation of these functions is located and preserved.

The key functions of all scientific activities can be summarized as the establishment of research priorities, the administration of research including development of instrumentation, the research and development itself, and dissemination. What follows is a brief analysis of these functions along with the categories of records created through these activities. We add references to the sections of these guidelines where further information can be found[30].

A. Establishing National/Multi-National/Discipline Research Priorities
[Table of Contents]
1. Hypothesizing and Defining Priorities

Establishing broad research priorities in geophysics and oceanography, as in space science, is done on a discipline level. When global phenomena seem important, priorities are worked out not only in national but in multi-national disciplinary organizations. This function of establishing research priorities is carried out in many different arenas. In the United States, the National Academy of Sciences' advisory boards, such as the Ocean Studies Board, the Polar Research Board, and the Board on Atmospheric Science, are sites for the scientific community to voice their opinions concerning broad program ideas. On an international scale, organizations like the International Council of Scientific Unions (ICSU) and the World Meteorological Organization (WMO), along with programs like the International Geophysical Year, have helped to set goals in the fields of geophysics and oceanography. In ICSU, priorities for broad areas to pursue typically rise up through one or more of the international unions for scientific disciplines (e.g., the International Union of Geodesy and Geophysics), its interdisciplinary bodies (e.g., the Scientific Committee on Oceanic Research), or its joint programs (e.g., the World Climate Research Programme). Through interaction with these groups and institutions, the scientific community promotes ideas for large multi-institutional collaborations.

Documentation: National Academy of Sciences' Ocean Studies Board, Polar Research Board, and Board on Atmospheric Science; International Council of Scientific Unions (its unions, interdisciplinary bodies, and joint programs); and the World Meteorological Organization.

The more specific hypothesizing and defining of priorities takes place as programs or projects are focused and shaped by the scientific community. In the cases we studied, we found two different approaches by research scientists: obtaining funding for formal workshops (usually employed by "technique-aggregating" projects) and informal gatherings (usually employed by "technique-importing" projects).

In the formal workshop approach, the instigators of a project obtain support from funding agencies to hold workshops for interested research scientists. These workshops define the scope and methodology of the project; select the members of an Executive Committee, an institutional base to serve as the project's SMO, and a principal investigator (PI) to administer it; and initiate a set of proposals for submission to a funding agency.

For the international projects we studied, ICSU and WMO have been particularly influential in setting up workshops and symposia, which typically generate a number of workshop panels. If project proposals receive the blessing of ICSU and WMO, workshop panel members and other interested scientists submit proposals to their national funding agencies and ICSU's members—the national academies—feel pressured to provide support.

In the less formal approach, the process of establishing priorities for specific projects can be initiated wherever key research scientists get together. Meetings of the American Geophysical Union or review panels of funding agencies are examples. Some, but not all, consortia need funding to set themselves up and prepare proposals. In the technique-importing projects we studied, funding agency personnel played an important role in defining the terms of consortia formation and, in some cases, later project research activities.

Whether the approach is formal or informal, scientists involved in the instigation of geophysics and oceanography projects should take care in documenting these initial meetings and workshops.

Documentation: Minutes and other records of workshops and initial meetings of consortia, proposals to funding agencies, correspondence of program managers at funding agencies, professional papers of scientists.

2. Funding
In the geophysics cases we studied, domestic funding was provided by various agencies (often more than one). The process involves the submission of proposals to discipline program managers at funding agencies, peer and panel reviews at the program level, and, for larger projects, review at the highest policy level, such as the National Science Board of the NSF. More specifically, technique-aggregating projects submit a package of proposals to one or more funding agencies, where a set of individual proposals (and thereby PIs) is selected. For the most part, the technique-importing projects we studied were supported by block grants from funding agencies to the consortia, which in turn selected proposals for using the imported techniques.

In some cases, detailed arrangements are made for processing proposals for specific elements of funded programs (see B.2, below). Finally, we note that consortia are funded, in part, by institutional members.

Documentation: Records of Federal funding agencies. Additional documentation, at higher levels not dealt with by our study, will be found in the records of university administrators, records of the Office of Management and Budget, and records of the U.S. Congress.

B. Administration of Research and Development
[Table of Contents]
1. Establishing Project Research Priorities

In technique-importing projects there would normally be a consortium responsible for appointing a standing committee (or several committees, or one committee with subcommittees responsible for separate aspects of the project) that advised or directed the project executives. A consortium in these projects proceeded in one of two ways: (1) it created an arena in which institutions could participate as equals even when one among them was made responsible for administration, or (2) it created a new, independent, freestanding entity in which the involved institutions could vest responsibilities that they did not want any existing member institution to dominate. Because technique-importing projects have needed to operate far longer than technique-aggregating projects (in order to apply the technique to many objects of curiosity), they have adopted a more secure institutional base and a more formal chain of command. Project executives include an Executive Committee and a chief administrator. Another key position at some project headquarters is that of staff scientist.

Documentation: Consortia headquarters records, records of Federal funding agencies, and professional files of PIs.

Technique-aggregating projects united multiple, independent PIs who formed a Science Working Group (SWG) that, in turn, selected members for an Executive Committee. In these projects, there would typically be a modest SMO run from an institution and under the direction of one of the PIs with grant funds to spend on coordinating logistics for the PIs.

Technique-aggregating projects, as compared with technique-importing projects, usually have a more ad hoc, informal institutional base in order to maximize self-governance. The SWGs for these projects are critical in managing whatever is intrinsically collective in the design of the projects, such as the allocation of space aboard oceanographic research vessels and the setting of their tracks, the distribution of core samples, a common data processing algorithm for combining data streams from several individual instruments, and protocols for comparing data sets obtained by deploying several techniques at the same site. That was usually the limit of the power allotted to a project's SWG, although the Executive Committee of the working group might, for example, be called on at times to add a judgment of project relevance to proposals to funding agencies. The rest was left to the discretion of individual PIs.

The SMO, under the direction of its PI, is responsible for the logistics of technique-aggregating projects. The office provides technical infrastructure and gets people and their equipment to the site where they can take their data. While this was challenging in all cases, it was particularly so for ship-based oceanographic projects as compared to land- and space-based geophysics projects. SMOs have also been responsible for creating centralized data management systems to facilitate exchanges of data streams and to maintain project-wide data bases. They have also organized post-field-work workshops for intra-project exchanges of preliminary findings, which—among other things—often inspired joint data analyses efforts.

Documentation: The SMO PI's files, including records of the SWG and its Executive Committee.

2. Funding Arrangements for Individual Geophysics Projects
As mentioned above, the technique-importing projects we studied were mostly supported by block grants from funding agencies to the consortia which, in turn, selected proposals for funding; however, in two of these cases, would-be individual users had to submit proposals for approval by the funding agency.

Documentation: Consortia standing committees and subcommittees, program managers and proposal files at funding agencies, and professional files of PIs.

3. Staffing
Staffing of geophysics and oceanography projects is most visible in records of workshops and consortia and the subsequent funding process. Workshops and consortia select committees and science administrators; proposals, as a minimum, identify PIs and, often, prospective team members. Decisions to fund proposals are made at various levels of funding agencies or by committees of consortia. Additional information on staffing of projects would be in the records of chief administrators, staff scientists, and papers of PIs.

Documentation: Workshop and consortia records, SWGs and consortia committees, funding agencies, chief administrators, and professional files of PIs.

C. Research and Development
[Table of Contents]
Gathering and Analyzing Data
While preliminary plans for gathering and analyzing data were spelled out in proposals, more detailed plans were developed by individual PIs and consortium administrators in technique-importing projects and by SWGs (made up of all PIs) and SMO administrators in technique-aggregating projects. Virtually all PI teams kept logbooks on the data-gathering techniques they employed (instruments, locations, and so forth), which provide the metadata necessary for data analysis. The data gathered in the cases studied by the AIP included electronic data, cores (of ice and of sediment), and water samples.

Documentation: Consortium administrators, including staff scientists; SMO (SWGs and administrators), professional files of PIs, and databanks.

D. Communicating and Disseminating Results
[Table of Contents]
In most cases, collaborations in geophysics and oceanography required each team to produce an article that would be published with the others as a set, often as a special issue of a science journal. However, collaborations did not control the content or author lists of publications. Instead, the PI of each experiment controls the team's data and publications. Members of other teams must obtain the PI's permission to use the data; in such cases, it is traditional for the PI to be asked to review the draft publication and to be listed as an author. Likewise, if a member of the PI's own team prepares an article for publication, it is customary for the PI to review the article and be listed as an author. The inclusion of other team members as authors varies from case to case. Arrangements for making oral presentations are typically even more informal, although PIs are usually aware of their team members' plans.

Documentation: Chief administrators at consortia and SMOs, professional papers of PIs and other team members, and press releases and other public affairs materials.

[Table of Contents]
A. Records of the National Academy of Sciences' Ocean Studies Board, Polar Research Board, and Board on Atmospheric Science

[Table of Contents]
In the United States, the National Academy of Sciences' advisory boards like the Ocean Studies Board, the Polar Research Board, and the Board on Atmospheric Science are sites for the scientific community to voice their opinions concerning broad program ideas. The impact of reports issued by these boards in geophysics and oceanography is particularly strong when relevant proposals are being considered by funding agencies. In addition, the National Academy of Sciences—like its counterparts in other countries—is a member of the ICSU; accordingly, the Academy usually influences the international programs that emerge through workshops instigated by ICSU.

The National Academy of Sciences should continue to save records of its Ocean Studies Board, Polar Research Board, and Board on Atmospheric Science as part of the Academy Archives. The records should include minutes, background papers, reports, and correspondence.

B. Records of the International Council of Scientific Unions and Records of the World Meteorological Organization
[Table of Contents]
The ICSU plays a major role in establishing broad research goals in all areas of the sciences. In ICSU, ideas for research programs typically rise up through one or more of its international unions for scientific disciplines, its interdisciplinary bodies, or its joint programs. In the projects we selected for study, ICSU joined with the WMO (a United Nations specialized agency) to provide support for organizing symposia on likely areas for research projects. When symposia generate promising plans, the instigators petition ICSU and WMO, who jointly decide whether or not to create a program office. From this office a program manager will appoint symposium leaders to a steering committee. The program manager and steering committee together search for a scientist to be "seconded" to the WMO for the purpose of heading an international SMO (which may or may not be physically located at the WMO).

ICSU should continue to save, as part of the ICSU Archives, records documenting support for symposia, workshop panels, and subsequent program administration meetings held with WMO. At WMO, the records of the Director of the World Climate Research Programme should be preserved and eventually transferred to the WMO Archives; these records should include minutes and reports of the steering committee and project workshops, and correspondence with—and other records of—the international SMO.

C. Federal Funding Agencies
[Table of Contents]
Federal funding agencies are responsible for supporting virtually all collaborative research in geophysics and oceanography. The core documentation is to be found in proposal files; the grant "jackets" (as the National Science Foundation calls them) include proposals with budget requests, peer and panel reviews, any significant correspondence concerning a project, and progress and final narrative and fiscal reports. In addition, some program managers at funding agencies played an important role in defining the terms of consortium formation and, in some cases, later project research activities.

Federal funding agencies should save grant jackets (i.e., proposal files) for all successful proposals and a random sampling of those for proposals that were rejected. In addition, the correspondence of program managers actively involved with project research activities should be preserved. These records should eventually be transferred to the National Archives. Funding agencies that supported geophysics and oceanography projects include the National Science Foundation, the National Oceanic and Atmospheric Administration, the United States Geological Survey, and the National Aeronautics and Space Administration.

A. Consortium Headquarters Records
[Table of Contents]
In consortia, it is the standing committees (and subcommittees where they exist) that tackle the most important issues such as determining the designs and specifications for instrumentation, staying abreast of industrial data-acquisition techniques, reviewing plans for deploying instrumentation, and—in most cases—determining the research done with the technique.

There is an Executive Committee for each consortium. Its importance at the inception of a project, when project boundaries and ground rules are debated, is consistently high; during later periods, its role has varied. The administrative head of each technique-importing project is a geophysicist called the chief scientist, director, president, etc.; the administrative and intellectual power of this position has varied. One consistent responsibility of the administrative head has been to develop formal rules for formatting data, curating samples, and publishing both preliminary, descriptive findings and a volume of refined, analytic findings.

A "staff scientist" works with each scientific party to ensure that rules are followed, to serve as liaison between the scientific party and both the assisting project engineers and the chief administrator at project headquarters, and to facilitate communication and distribution of samples following data acquisition.

Project headquarters may be new freestanding entities for consortia that establish them; otherwise, they are an office at an instigator's home institution.

Historically valuable records of consortia headquarters should be preserved. These records would include the files of the standing committees (and subcommittees), the Executive Committee, the administrative head of the consortium (chief scientist, director, president, etc.), and the staff scientists. Where a consortium headquarters is attached to an academic institution, the most appropriate repository for its records would be the academic archives of the institution with which it is affiliated. Funding agencies should facilitate these arrangements through the contracts for consortia made with these institutions. If these arrangements fail, the records of the consortium should be offered as a gift to the Archivist of the United States and the National Archives and Records Administration. We expect that documentation for the largest or most controversial projects will be found in the records of the academic institution's administrative and contracts-and-grants offices. Where consortia headquarters are at new, freestanding institutions created for the project, the records should be offered as a gift to the Archivist of the United States and the National Archives and Records Administration. Again, the funding agencies should facilitate these arrangements.

B. Science Management Offices
[Table of Contents]
An SMO, under its administrator (who is one of the PIs), is responsible for the logistics of technique-aggregating projects in geophysics and oceanography. Its other responsibilities include data management systems and post-fieldwork workshops. An SWG, made up of all the PIs, and its Executive Committee manage what is intrinsically collective in the design of the project.

Records of the administrator of the SMO and of the project's SWG (and its Executive Committee), including minutes of meetings and reports, should be preserved. In all cases, the most appropriate repository for these records would be the archives of the institution with which the consortium is affiliated. Funding agencies should facilitate these arrangements through the contracts for consortia made with these institutions.

C. Scientific Data
[Table of Contents]
Most of the data are electronic in format and observational in character; they exist in both raw and processed form. Their usefulness for long-term scientific purposes is unquestioned.

Observational electronic data are of permanent value. They should be provided by the original users with adequate metadata to make them of value to secondary users. The preparation and retention of data should follow the recommendations of the National Research Council's report Preserving Scientific Information on Our Physical Universe: A New Strategy for Archiving Our Nation's Scientific Information Resource [31].

[Table of Contents]
Professional Papers of Principal Investigators and Other Scientists
Papers of PIs are prime locations for documentation of a number of topics. These include details of staffing of the project team, plans for data gathering and analysis, use of the data by team members and others on the project, publications based on the data, and correspondence and other communication with team members.

Decisions to archive papers of scientists who have served as PIs or members of their teams for projects in geophysics and oceanography should be made by archivists at their home institutions on the basis of their overall careers. If scientists have regularly played a leading role in important research, the records of their participation should be saved.

(By Lynne G. Zucker and Michael R. Darby)
[Table of Contents]

Overview: The Problem
Our central sociological problem can be simply stated: trust among scientists allows early sharing within the collaboration, or at least active use, of ideas and the theoretical models and data they generate. The potentially high scientific value of these ideas, models, and data is often recognized before they can adequately be protected through publication. If scientists do not feel sufficiently protected, they may move their best ideas and models out of the collaboration to another experiment or may even exit the collaboration. Trust, by allowing use of high-value new ideas, tends to increase the overall value of the results of scientific collaboration.

When collaborations are large, multi-institutional, and often also multi-disciplinary (or involve multiple sub-disciplines), initial differences in backgrounds across scientists when they first join a collaboration make it difficult to produce trust easily. The fourteen collaborations we are studying in space science and geophysics differ in the degree of pre-existing common culture and organizational heterogeneity. They also differ in the degree to which strong administrative oversight provides a third-party guarantee of protection of intellectual contributions, providing a pre-existing, initial basis for trust.

The initial level of trust in a collaboration, then, is determined by the pre-existing social structure. But scientists (and to some extent administrators) also build new structures, some informal and some formal, during the course of the collaboration to deal with problems that occur with trust, such as conflict over interpretations of data or rights of access to data by scientists other than the principal investigator (PI). Our basic argument is this: scientists differ in the degree to which they will form new social structures to produce trust during a collaboration as a function in part of the adequacy of the pre-existing social structure in producing trust. Again, the adequacy of these structures is importantly determined by initial homogeneity in scientific background, initial homogeneity among organizations involved, and initial strong administrative oversight that can create homogeneity of expectations related to trust.

There is a second set of factors that importantly determines how much new social construction will occur: differences in scientific production across collaborations make trust more or less important. The nature of the scientific tasks determines: (1) how appropriable the science being done is by other scientists, both within the collaboration and in the wider scientific community; (2) how accessible the science is, both in terms of whether others have any access to the data in the normal course of the project (e.g., coordination across tasks on the project increases access) and in terms of whether others can adequately interpret the data in the absence of detailed information on the instrument or direct collaboration with the PI.

Based on these differences in scientific production, we argue that collaborations have different optimal levels of trust production. Optimal levels of trust depend in part on the actual risks of misappropriation of data, most often a function of pre-existing social structure that regulates behavior (norms regarding legitimate use of data, for example) and in part on the degree to which trust is causally related to productivity, for example lack of trust would interfere with coordination among PIs. Therefore, the demand for trust that results in actual social construction during a collaboration increases as a function of factors that alter that optimal amount, including: (1) amount of mistrust actively produced by lack of common understandings, such as conflict over interpretation of data; (2) amount of joint action necessary, such as in coordinating construction of shared vehicles and constructing instruments across organizational or even national boundaries; (3) degree to which data from instruments do not require PI interpretation; and (4) degree to which work of the collaboration cannot be fully subdivided, walling off different understandings within small, project-based teams or, especially in space science, within country-based teams[33].

When the ideas and tests of those ideas proposed by the collaboration are inherently more valuable in science, there will be more competition over rights and access to the resulting data. Sociologists of science most often argue that wider access is an unalloyed good, providing for faster advance of science as ideas are shared in a common "resource pool." But incentives in science importantly include the rights to discoveries and related instruments and data that the PI develops and funds; it is arguable that less good science is produced when these incentives are not available as in the familiar commons dilemma. Thus, as the value of the science produced by the collaboration increases, demand for trust increases independent of other factors. We also argue that when trust is lacking on a particular collaboration, the leading-edge scientific ideas are much more likely to be "moved" to another project and tested where there are stronger proprietary rights. So there is two-way causality in this case: if the ideas in the collaboration are very valuable, then it is more likely that trust will be constructed; and having high levels of trust predicts higher levels of scientific value, because if trust is insufficient the ideas are likely to be moved to another project where trust is higher.

So far we appear to be arguing that social construction of trust is a positive asset, and hence there appears to be no reason to suspect that trust would not be constructed to facilitate interaction and exchange in scientific collaborations. But, in fact, social construction requires considerable redirection of human time and energy (Zucker and Kreft 1994; Zucker, Darby, Brewer, and Peng 1995). Thus, social structure is often constructed with considerable ambivalence, as a "last resort" that decreases scientific productivity because of the redirection of time and energy. Most scientists in our collaborations would agree.

In the following empirical analysis, we examine two central means of trust production (see Zucker 1986): Formal impersonal structures, such as rules, and informal—highly personal—social relations, specifically here collegiality inside a particular collaboration. The results we report here support our basic idea that pre-existing common culture and the demand for trust within the collaboration strongly affect how much formal and informal trust is produced in space science and geophysics collaborations. Further, we find that both strong administrative oversight, part of the pre-existing social structure, and formal rules on data and publication created during the collaboration, formed through emergent social construction, increase the production of high value science.

Trust in Collaborations: A Sociological Model
Organizational culture consists of patterns of behavior that are based on shared meanings and beliefs that facilitate coordination of activities within an organization by making behaviors both understandable and predictable to interacting members (Zucker 1977; Berger and Luckmann 1966). The typical organization has limited turnover of members and a relatively long history, so that transmission of organizational culture is only problematic for new members that constitute a small minority at any one time in an on-going organization.

In contrast, the large scientific collaborations we are studying are temporary or "ephemeral" organizations, even though they may last well over ten years. Not only is their life span as organizations comparatively brief, but collaborations also typically have much higher turnover of members, with most participating for just a few years even in collaborations lasting ten to twenty years: some use the data for their Ph.D.s or as part of a postdoc and then leave; most others collect data for a short time relative to the length of the entire project. Most participants retain a separate primary professional affiliation even while they are part of the project. Thus, socialization to the "culture" of the collaboration is a continuing problem, one shared by many temporary projects although only acknowledged in passing in most treatments that focus on team productivity (Faulkner and Anderson 1987; Eccles 1981).

One solution to the socialization problem faced by collaborations is to involve scientists who are very similar in background socialization: when new members share common backgrounds and experiences with other members of the organizations, there is likely to be a substantial congruence in the members' interpretive frameworks (Schutz 1964). Thus, the transmission of culture is relatively easy in this context. A second solution is to create a common culture through oversight by another organization that has legitimate authority over the collaboration.

Space science and geophysics have developed somewhat different social structures that support the scientific work conducted within each area. These pre-existing structures include the extent of lead institution or government oversight, including oversight provided by lead laboratories, and the extent to which scientists involved are spread across specialties in different disciplines and sub-disciplines rather than representing a single discipline. Government oversight is generally much more directive in space science, through the National Aeronautics and Space Administration (NASA) and its laboratories, than it is in geophysics (Chandler and Sayles 1971). We argue that these structures are significantly related to trust that exists between scientists at the beginning of the collaborations we are studying. Simply providing a common framework provides some basis for trust, but NASA, by being intimately involved in the collaborations, can provide an independent basis for trust among collaborators, acting as a third-party "guarantee" for impersonal or formal trust (Zucker 1986; Zucker, Darby, Brewer, and Peng 1995).

When these pre-existing social arrangements involve a common pattern of prior socialization and thus common understandings, such as involving scientists all within one discipline or at most a few different disciplines, the overall initial level of trust is higher and there is a common social basis on which to develop remedies if trust is breached (Zucker 1986; Tolbert 1988). Another measure of the degree of cultural heterogeneity prior to the collaboration is the number of separate scientific review panels: use of different standards of review in multiple sub-panels, or different levels of screening or re-review for the same set of proposals, are indicators of professional dissensus. To the extent that there exists heterogeneity among the scientists involved, and no legitimate central authority to establish accepted practices and mediate disputes, then the initial level of trust is lower, as well as the ability to repair breaches of trust.

But not all collaborations require the same degree of trust to "do good physics" that adds importantly to understanding of the phenomena under study. Two key aspects of team production in collaborations determine the demand for trust, and therefore the amount of social construction that we expect to occur during the collaboration: the degree of conflict caused by the lack of common culture and the degree to which joint action and teamwork across the sub-projects are required for production of valuable science.

For example, a comparatively high level of initial heterogeneity in a collaboration, and thus a low level of initial trust, may not occasion the development of new rules to produce trust if the scientific queries, and thus the specific tasks, require a low level of coordination and teamwork. To the extent that small pre-defined sub-teams that have been built up by the PIs can work independently and need not make all hypotheses or data "public" to the rest of the collaboration, they may not need to trust those outside of their own small team. The same level of initial trust may, however, lead to social construction when coordination is required and especially when this coordination results in conflicts among members of the collaboration. Thus, an identical level of initial common culture may provide either adequate trust to ensure productivity of the group or inadequate trust, creating a demand for social construction.

If trust is adequate for the tasks at hand within a collaboration, it is unlikely that new social construction of trust will occur during the collaboration. Social construction is not costless, and therefore tends to occur only when there is sufficient demand for it (Zucker and Kreft 1994; Zucker, Darby, Brewer, and Peng 1995). The social overhead involved in constructing new social structure, including trust, is substantial.

So it makes intuitive sense that trust is not produced in task oriented groups unless it is at least expected to add significantly to the overall productivity of the human activity (non-task oriented groups, of course, may produce trust for other reasons related to their different goals). Further, as the value of the science produced increases, so does the demand for trust. Sometimes the high value of the science produced is correctly anticipated before the project begins; most often this is an emergent element in the collaboration.

Some of the most discussed scientific conflicts, such as those in one scientific collaboration [Einstein] regarding data ownership even before data collection began, are very clearly related to the anticipated high value of the scientific output of the collaboration. And the anticipation was correct: judgments of the collaboration in the American Institute of Physics (AIP) interviews ranged from one "modest success" to "very successful," "outstanding," and "tremendously successful." As one respondent put it, he "wouldn't trade the experience for a minute."

The conflict began as a heated negotiation between the two principal members of the collaboration over who was entitled to observe which part of the sky: scientists at Columbia initiated the discussion with scientists at Harvard's Smithsonian Astrophysical Observatory, the major players, along with MIT and the Goddard Space Flight Center. These discussions led to development of the "Red Book," allocating observing rights to the 1000 best sources among those in the collaboration. As both gossip and the Red Book spread in the scientific community, the claim staking inspired community measures to break the monopoly; these measures were successful and created a very active and early "guest observer" program on Einstein. Still, for the first year "or so" of the mission, observing time and the data gathered from it were divided by formula among the principals: 40% went to SAO, 25% to MIT and Goddard, and 10% to Columbia, with the remainder going to guest observers selected in a competitive grant format.

A good test of the equity of the final decision on Einstein about allocation of the main scientific resource, observing time, is to examine whether or not the key scientists who designed Einstein ever participated in another project of similar scope. Given initial expectations regarding rewards for investments made as PI compared to the actual allocation of benefits, we would expect that the PIs would feel insufficiently compensated and would at least require an up-front guarantee about proprietary rights if they were willing to consider entering into a similar collaboration again[34].

Our formulation allows us to see social embeddedness not as a universal (contra Granovetter 1985), but as a variable. On the one hand, if the tasks are quite separable and independent, social embeddedness is much less necessary. On the other hand, to the degree that social structure is relevant to a task or set of tasks (Schutz 1964), it will be imported from other contexts or socially constructed. Not all action requires the same degree of social embedding, and the relevance of specific social groups to the action at hand determines within which social context embedding is desirable. By involving not just the PI, but the whole team of researchers that the PI has developed, our collaborations make efficient use of existing social structure, embedding the collaboration in a multiple-laboratory, multiple-institution (usually university) context. At the same time, this strategy produces barriers to doing science across teams, since scientists in heterogeneous organizations may have little basis for trust across their boundaries (see results in Zucker, Darby, Brewer, and Peng 1995). Thus, social embedding has real consequences for trust production, for social constraints on the science that gets done, and thus for the overall value of the science produced. And the trade-offs involved are seldom recognized and thus are seldom part of the strategic design of the collaboration.

Prior research by our UCLA team indicates that trust on scientific teams can often be produced by informal means, through collegiality—though with potential third party enforcement within a single organization, most commonly a university (Zucker, Darby, Brewer, and Peng 1995). We have also theorized, but not tested empirically until our involvement in AIP's study of space science and geophysics, that trust is importantly produced by more formal, impersonal mechanisms: intermediary structures (here, strong oversight by lead institutions or government) and rules that "guarantee" the outcomes when trust is otherwise uncertain are both expected to be important (Zucker 1986).

To test the basic elements in our argument across fourteen major collaborations in space science and geophysics, our empirical analysis has four major tasks: (1) to identify the pre-existing social structures in space science and geophysics, including both ones that are shared in common and ones that differ significantly between the two research arenas; (2) to identify demand for trust production in terms of the degree of coordinated activity required for the scientific experiment(s) and in terms of conflict generated within the collaboration; (3) to identify the emergent structures, especially rules for data use and publication and new collaboration-specific collegiality, and to explain their production in terms of both the level of pre-existing trust and the demand for trust in each collaboration; and, finally, (4) to specify the role both pre-existing and emergent social structures producing trust have in generating high-value scientific output.

Measuring Important Sociological Variables in Space Science and Geophysics
The study of multi-institutional collaborations was initiated by Joan Warnow-Blewett and Spencer Weart of the Center for History of Physics, American Institute of Physics, in order to understand better how large-scale collaborations are organized, their special problems and benefits to physicists and to physics, and how these collaborations differ across different arenas of scientific investigation. Much discussion has developed around "Big Science," but little empirical research has actually investigated how it is organized and what precisely are the consequences of large, multi-institutional collaborations for the science that gets done. The initial study was of collaborations in High Energy Physics (HEP) by the Center for History of Physics of the American Institute of Physics (Warnow-Blewett and Weart 1992; Warnow-Blewett, Maloney, and Nilan 1992; Sisk, Maloney, and Warnow-Blewett 1992; Genuth, Galison, Krige, Nebeker, and Maloney 1992); the basic research design was modified and then extended to the study of collaborations in space science and geophysics. Our sociological team also produced a preliminary report to the American Institute of Physics (Zucker 1993).

At the beginning of this new phase of research, in consultation with the AIP Working Group on Documenting Multi-Institutional Collaborations in Space Science and Geophysics, it was decided not to compare HEP directly to space science and geophysics, but rather to study the two new areas independently. In our report, following standard sociological analysis procedures, we combine our analysis of these two areas in most cases because there are few significant differences between them. We do, however, test for significant differences in all of the analyses we run in order to justify considering all fourteen of the collaborations in a single analysis. Thus, for the most part, when we identify variables that are important in space science, they are also important in geophysics, and vice versa. We also do very limited exploration of differences within the more heterogeneous geophysics, comparing tectonics/climatology with oceanography; here there are even fewer significant differences, and thus we do not use these differences in any of the subsequent quantitative analysis.

Sociological Coding and Initial Analyses of Interview Data
The sociological team coded and fully analyzed a total of 197 interviews, 95 in space science across 6 collaborations, and 102 in geophysics across 8 collaborations. We also coded but did not analyze a small number of interviews for individual participants in related collaborations that either preceded the main collaboration of interest or were contemporaneous but involved different scientists. Two collaborations, both in geophysics, were affected by this decision. We did consider these interviews in our review of the overall collaboration, but felt that including aspects of a different collaboration in our measures of such variables as number of countries involved would be very misleading.

More specific detail concerning the collaborations studied in the AIP project can be found in the report of the project historian, Joel Genuth (1995a, 1995b). Our focus is primarily on the variables we coded, presented in Attachment A-1. We only used a small subset of these variables in our quantitative analysis, but provide the full list for comparative purposes. The specific variables that we use are identified at the beginning of the section on the quantitative analysis. In all cases we use the variable numbers contained in Attachment A-1.

There were four stages to our sociological analysis of the interview materials that preceded our quantitative analysis. First, initial content coding of each interview was done, using a coding format developed iteratively across ten of the interviews drawn from different collaborations and different areas of science. The coding format drew out specific areas, such as rules regarding data access and the type of authority exercised in each collaboration. In addition, each interview was also examined for information concerning the organization of the collaboration; based on what each respondent said about how the collaboration was organized, our coders created an organization chart designed to reflect how each scientist perceived the collaboration. Sometimes that chart describes the overall collaboration, while in other cases, the organization of government sponsors or of the PI's own project is described; thus, we have a phenomenological view from nearly every scientist of how the collaboration was organized.

One of our coders, Richard Johnson, coded nearly two-thirds of the interviews. Still, because three different coders were involved at various stages, we carefully checked inter-coder reliability across five interviews, coding them independently three times. Inter-coder reliability was acceptably high and thus information is combined from all three coders in the analysis that follows.
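The reliability check described above can be illustrated with a small sketch. The coder names, the coding categories, and the use of pairwise percent agreement are all hypothetical; the report does not state which agreement statistic was actually computed across the five triple-coded interviews.

```python
# Hypothetical illustration of an inter-coder reliability check: three coders
# independently assign a category to each of five interviews, and we compute
# pairwise percent agreement. All names and codes below are invented.

from itertools import combinations

# codes[coder][i] -> category assigned to interview i (hypothetical data)
codes = {
    "coder_A": ["rules", "authority", "rules", "data", "conflict"],
    "coder_B": ["rules", "authority", "rules", "data", "rules"],
    "coder_C": ["rules", "authority", "data",  "data", "conflict"],
}

def percent_agreement(a, b):
    """Fraction of items on which two coders assigned the same category."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Agreement for every pair of coders, and the mean across pairs.
pairs = list(combinations(codes, 2))
scores = {(p, q): percent_agreement(codes[p], codes[q]) for p, q in pairs}
mean_agreement = sum(scores.values()) / len(scores)
```

With real data, a chance-corrected statistic such as Cohen's kappa would typically be reported alongside raw agreement, since percent agreement alone can be inflated when one category dominates.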

The next three stages were conducted primarily by Lynne Zucker. The second stage involved the following: salient points from each interview coding format that represented an individual scientist's or administrator's experience in the collaboration were transcribed in reduced form, including the number assigned to each person interviewed (to maintain anonymity), the employing organization, the self-reported discipline, Ph.D. year and university, role on project, report of prior or post contact with collaboration members, the instrument designed/built/used, who was responsible, and where it was built. Source of funding, data sharing, why each scientist or administrator joined the collaboration, success of the collaboration, individual contribution to that success, the measure(s) of success, and finally the number and type of meetings held rounded out the reduced coding done of the individual interviews. Nearly two-thirds of the variables in Attachment A-1 are drawn from this reduced coding format (64 out of 89 coded variables, with variables 90-103 constructed as a combination of other coded variables).

Third, the organization chart was quantified by combining the phenomenological perceptions of organization across scientists in the same collaboration. Sometimes these perceptions are quite similar, sometimes divergent. Specifically, to be able to use these measures of organization across different collaborations, we coded the number of countries, the number of institutions, the number of levels of the administrative hierarchy at its highest, the number of scientific review panels identified, the number of agencies/institutions involved in the review, the number of subareas involved in the review, and whether the interviewee's primary focus was on the governmental level, the institutional level (e.g., the university), or the level of the scientific group. Variables 73 through 85 in Attachment A-1 were created using these "constructed" organization chart data based on the perceptions of the administrators and scientists interviewed. Thus, these 13 variables are included in the 64 drawn from the reduced coding format described in the paragraph above.
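The aggregation step can be sketched as follows. The field names and the choice of the maximum as the combining rule are assumptions for illustration only; the text does not specify how divergent perceptions were reconciled into a single collaboration-level count.

```python
# Hedged sketch of the third coding stage: each interviewee's perceived
# organization chart is reduced to counts, and the counts are combined into
# one collaboration-level measure. Here we take the widest scope any
# interviewee reported (an assumed rule, not the project's documented one).

perceptions = [  # one dict per interviewee in the same collaboration
    {"countries": 2, "institutions": 6, "hierarchy_levels": 3},
    {"countries": 3, "institutions": 9, "hierarchy_levels": 4},
    {"countries": 2, "institutions": 7, "hierarchy_levels": 4},
]

def aggregate(perceptions):
    """Combine per-interviewee counts into collaboration-level counts."""
    keys = perceptions[0]
    return {k: max(p[k] for p in perceptions) for k in keys}

collab_level = aggregate(perceptions)
```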

Fourth, information directly taken from the interviews, from the individual-level coding, from the organization chart, and from archival and background materials on each collaboration both collected by AIP and obtained from university and government archives, provided the basis for an overall coding of each collaboration, at the level of the entire collaboration. Variables coded include number of countries involved, institutional project scope—both lead institutions and all other institutions mentioned at some point in the individual-level interviews, sources of funding at the collaboration level, whether instruments were newly developed or not and what they were, vehicles used, data sharing from the collaboration perspective, the time period of the collaboration—initiation, funding, data collection, and end, authority type and general authority relationships, coordination, conflict, archiving practices indicative of information flow, and collaboration history. Variables 5 through 8 and 15 through 35 are taken from this aggregate collaboration-level coding.

Selecting Collaborations, Number of Interviews per Collaboration, and Interviewees
Specific collaborations were selected by AIP staff in consultation with the AIP Working Group on Documenting Multi-Institutional Collaborations in Space Science and Geophysics. Selection of scientists and managers/administrators to interview was also done in consultation with the Working Group and was also affected by the availability of key personnel. In one case, death prevented interviewing an initiator of a collaboration; in other cases, travel schedules didn't mesh. Overall, though, most of the interviews screened as most important were actually completed.

The number of interviews per collaboration was determined more interactively. Not only was the availability of key persons important, but for several collaborations a judgment was made that further interviews were unlikely to produce new information. When we check for selectivity bias, including in the analysis only the 197 interviews fully coded and analyzed by the sociological team as discussed above, we find it reasonable to reject the hypothesis of bias. In Attachment A-2 we report the results, showing six separate regressions.

Our basic findings in Attachment A-2 indicate that, as might be expected, the number of all participating organizations most strongly predicts the number of interviews, suggesting that a sensible sampling procedure was used, given that the inter-institutional aspect of collaborations is among the most important variables. The number of different disciplines (or sub-disciplines) also shows a significant positive relationship to the number of interviews in Models 2 and 5, the only two in which it is entered. In Model 3, an instrument having an exciting design increases the number of interviews by about 3. In Models 4, 5, and 6, we find some indication that geophysics projects have 2 to 3 fewer interviews than space-science projects, but that result reaches statistical significance, even at the 0.1 level, only in Model 5. Space science does not differ significantly from geophysics in the number of interviews when only variable 3 is entered in Model 6.
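In its simplest form, the selectivity check reported in Attachment A-2 reduces to regressing the number of interviews on collaboration characteristics. A minimal one-predictor ordinary-least-squares sketch follows; the data are invented for illustration and are not the study's values.

```python
# Minimal OLS sketch of the selectivity check: does the number of
# participating organizations predict the number of interviews?
# All data below are hypothetical.

def ols_slope_intercept(x, y):
    """Simple-regression slope and intercept via the usual OLS formulas."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

n_orgs       = [3, 5, 8, 12, 20]    # hypothetical counts of all organizations
n_interviews = [6, 9, 14, 20, 34]   # hypothetical interviews per collaboration

a, b = ols_slope_intercept(n_orgs, n_interviews)
print(round(b, 2))  # 1.65 -- a positive slope: more organizations, more interviews
```

A positive, significant slope is consistent with sampling interviewees roughly in proportion to the collaboration's institutional breadth.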

Investigating Trust Production in Space Science and Geophysics: Qualitative Analysis
We have outlined a novel sociological model of trust and explained in part how collaborations in space science and geophysics fit this model. In outline, the model predicts the following. Initial social structure varies in its amount of trust depending on common culture. If cultural heterogeneity cannot be walled off by segmenting teams or segmenting tasks, that is, if coordination and joint action are required, then demand is created for the social construction of trust, and that demand increases further as a function of the value of the science produced. Trust is created by formal rules or by building informal relationships, and since these structures are costly to produce, they are produced only under sufficient demand. Finally, the value of the science produced is significantly affected by the level of trust, as a joint function of pre-existing common culture and social construction through formal rules or informal collegiality.

We base our qualitative analysis on our detailed coding of the interviews from 14 major scientific collaborations in space science & geophysics/oceanography. Of course, all of these projects operate in a broad common culture defining in outline how science is done, what is "good physics" (evaluation standards), and what kind of team organizations are "reasonable." But within that broad umbrella, there is enormous variance.

Prior Social Structure: Low Trust Construction
One project high in prior social structure imported into the teams is COCORP (Consortium for Continental Reflection Profiling). Although four institutions are involved, our respondents outside of Cornell pointed out, somewhat heatedly, that while data was technically available to everyone, it was in fact held close, and because Cornell did the data processing, "the sharing of the interpretational effort [with outside scientists] was superficial." The subdisciplines of those interviewed were either identical or very similar. Cornell management was characterized as "executive style" to "autocratic," but was also viewed as run in a generally fair way, with "decisions based on scientific merit" and "democratic in its politics" at the subproject level. The results, though interesting, were unevenly characterized by our respondents, ranging from primarily important for the "quantity of the data received" (our emphasis) to "changing how geologists think about crustal geometry and evolution."

With strong prior social structure imported into COCORP, strong central administration, and mixed evaluation of the significance of the findings, we expect and find few rules and few conflicts over proprietary rights to data.

Very Striking Discoveries: High Trust Construction
The astrophysics community is a tight-knit one (see Preston 1981), but in the case of the Einstein Observatory (HEAO-2) collaboration, the initially strong physics (rather than astronomy) leadership structure and the high expected value of discoveries may have led to early development of rules regarding data sharing within the collaboration. As one respondent noted, the project "began as a purely PI Class mission, and there were percentages the data was divided up ... 40 percent for SAO, 25 percent for MIT and Goddard, 10 percent for Columbia." The Red Book, produced by the collaboration to divide the best 1000 "target" observations, also produced a heated negotiation within the collaboration over who was entitled to observe which part of the sky.

There are two divergent views among our respondents about what happened next. One view is that the PIs wanted to run this "as an observatory for the public good" and thus opened the collaboration to a guest observer program immediately and to full data access after one year. The other is that community pressure after the Red Book was made public forced the guest observer program and that data access remains partial and selective (only "good data or the clean data" ... "a small fraction of all the photons collected"). Thus, detailed rules were created early in the collaboration and often revised in the light of conflicts internal to the collaboration and in response to pressure from the outside scientific community. These conflicts continue even now over data access and interpretation. The actual success was as expected: one key respondent said "it made X-ray astronomy relevant to all fields and all disciplines of study [in astronomy]," while another emphasized that the "range of topics was enormous," and still another key respondent felt that it "changed astronomy's definition of what needed to be researched." One respondent noted that over 80 dissertations were produced from the observations. Many major discoveries were made, but the comment made by one participant in the face of much conflict and little collaboration among the four principal institutions really says it best: "Wouldn't trade the experience for a minute."

With high expected scientific value, it is not surprising that Einstein experienced conflict among the collaborators over proprietary rights to the data even before launch. But the amount of conflict generated by the larger outside astrophysics community was unprecedented, causing major changes in rules governing data access and especially access to observatory time, creating the guest observer program.

Subdiscipline Diversity: High Trust Construction
Two projects exemplify this type of collaboration well: WCR (Warm Core Rings) and WOCE (World Ocean Circulation Experiment). We will also briefly discuss Voyager, since it also included a wide range of subdisciplines, but did not occasion nearly the same level of social construction as did WCR and WOCE. The strong oversight by NASA of Voyager undoubtedly played a major role, as did low levels of required coordination across subprojects (other than that conducted by NASA).

The subdisciplines on WCR ranged across chemical, physical, and biological oceanography, and the instruments interfered with each other, requiring a strict limit on the number of instruments over the side of the ship at one time. A number of respondents reported conflict over this, which strongly limited observation time. There were also reports of major conflicts over data interpretation, and no monitoring of discrepancies nor attempts to reconcile them. While a number of the scientists reported reasonable success for their specific projects, overall the assessment was "no outstanding single results."

WOCE was even more ambitious in terms of the diversity of the disciplines brought together on shipboard. There was marked conflict over gathering samples between physical and chemical oceanographers, and the chemical group felt that they lost. On most cruises, the chief scientist was a physical oceanographer. There were a number of disputes about data sharing: "data collectors didn't want to share with modelers" and WOCE "lost the modelling community." There were many rules for data sharing, but these were routinely ignored, as were archiving requirements.

But WOCE data was also not judged to be exceptionally valuable by the oceanographers participating in the collaboration. One respondent stated that WOCE was "not designed to find earthshaking new technology," and it is "unbelievably routine." A major criticism was a lack of support for data analysis; only collection was supported financially. One respondent noted that he would never commit himself totally to WOCE and was simultaneously working on 4 or 5 other projects. Another respondent noted that the project helps keep his lab going, but that he is not doing significantly more science than he would have done without it.

Although the Voyager project also started out with a diverse set of subdisciplines (intermediate in diversity between WOCE and WCR), just one basic rule regarding proprietary use was reported, and reports indicated generally good cooperation across teams and little conflict (conflict was reported only between one team and the astrophysicists). Most conflict related to competition (teams were described as "both competitive and collegial"). The science produced was judged "spectacular," "a scientific tour-de-force." One respondent reported that Voyager was the highlight of his life; another stated that it was his most enjoyable project.

What accounted for the low conflict, given the subdiscipline diversity and valuable results? Perhaps the directive NASA leadership through the Jet Propulsion Laboratory (JPL), sometimes viewed by the scientists as "autocratic," provided the overall social framework.

We cannot adequately unravel these relationships in a single taxonomy or typology, because we expect that more than one factor affects how much social construction of trust actually takes place in a collaboration. So we now turn to a multivariate analysis, in which we can enter a number of different variables at one time. Given the wide variance depicted in our qualitative analysis, plus the many variables for which only one case differs (such as the role of the outside scientific community in Einstein), the fact that we are still able to explain much of the variance in trust-production processes across our 14 major collaborations lends greater support to our conjectures regarding the conditions under which social construction to produce trust occurs.

Trust Production in Space Science and Geophysics: Quantitative Analysis
Our qualitative analysis provides the basis on which we developed our quantitative measures and models. We again follow our sociological model of trust, outlined at the beginning of the last section, and explore the same relationships in a series of analyses of the fourteen collaborations in space science and geophysics. We begin with the basic descriptive statistics reported in Table 1, presenting the means and variances of the key variables separately for space science and geophysics. As a quick perusal of the table indicates, there are few large t-statistics and few significant differences, correcting for the unequal number of cases. Even fewer of the t-statistics are significant for the comparison along these same variables within the more heterogeneous geophysics, comparing tectonics/climatology and oceanography.

Turning first to the space science/geophysics comparison, the NASA/NSF difference is of course expected: NASA is always involved in the large space science collaborations and the NSF never is, while the NSF plays a strong role in the geophysics collaborations and NASA a very minor one (mainly satellite tracking of ocean conditions and satellite weather data). Also interesting is what the two fields share: the U.S. government is involved in funding, and in some degree of oversight, of all fourteen collaborations. Required pre-publication review occurs only within geophysics collaborations. A significantly higher degree of coordination is required in space science than in geophysics. There is also a significant difference in very long periods of PI-exclusive use of the data (over 12 months), with geophysics possibly "over-protecting" and thereby possibly diminishing the overall productivity of the collaboration.

For both of our measures of the value of the science, an instrument having an exciting design and high-value science being produced, space science has significantly higher mean values than does geophysics. Finally, the year funding began is significantly earlier for space science; the means differ by slightly over ten years. It is arguable, then, that collaborations in geophysics have not had the same time period over which to pay off and to have the value of their contributions recognized; we discuss this potential interpretation in more detail below.
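The group comparisons in Table 1 rest on two-sample t-statistics computed on groups of unequal size; per the table note, equal variances are assumed, which is the pooled form of the test. A minimal sketch follows, using invented funding-year data rather than the study's values.

```python
import math

# Sketch of the pooled two-sample t-statistic behind Table 1's comparisons.
# It assumes equal variances in the two groups (as the table note states).
# The funding-year data below are invented for illustration.

def pooled_t(sample1, sample2):
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)
    ss2 = sum((x - m2) ** 2 for x in sample2)
    sp2 = (ss1 + ss2) / (n1 + n2 - 2)          # pooled variance estimate
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

space_science = [1968, 1970, 1972, 1973, 1975, 1977]            # hypothetical
geophysics    = [1978, 1980, 1983, 1984, 1985, 1986, 1988, 1990]  # hypothetical

t = pooled_t(space_science, geophysics)
print(round(t, 2))  # a large negative t: space science funded earlier
```

Because the pooled variance weights each group by its degrees of freedom, the statistic automatically corrects for the unequal number of cases in the two fields.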

If we briefly examine how tectonics/climatology differs from oceanography, we see that the number of different disciplines is significantly higher in oceanography, as is the number of government organizations involved. The number of vehicles and the degree of required coordination are also significantly higher in oceanography, and oceanography builds significantly more collegiality. Generally, though, the differences are slight enough that we do not carry this within-geophysics comparison forward into our other analyses.

Table 1
Important Sociological Variables: Significant Differences Between Space Science and Geophysics
I. Initial Social Structure
A. Cultural Heterogeneity
14: # dif. disciplines
85: # scient. review bds.
B. Oversight
1. Authority and Control
6: # lead orgs.
8: # govt. orgs.
22: NASA, y=1, n=0
23: NSF, y=1, n=0
101: administ. complexity
2. Org. Heterogeneity
9: # all orgs.
10: # univs. in all orgs.
29: # foreign $ sources
82: max levs. authority
83: # levels authority
II. Emergent Soc. Structure
A. Effects-Heterogeneity
50: PI excl. data use
57: pre-public. rev., y=1
59: conflict-data interp.
79: deg. sci. group focus
93: degree of conflict
B. Coordination-Jt. Action
36: shared vehicles
37: # of vehicles
42: 1 team only, y=1, n=0
91: deg. coordination req.
C. Formal/Informal Trust
92: # rules re data, pubs.
95: collegiality
103: PI 12+ mo. excl. data
III. Value of Sci. Produced
45: instmnt. excit. des.
102: high-val. science
IV. Control Variables
3: space sci. 1; geoph. 2
16: year $s began

Prob(|t-statistic| > x): a<.1, b<.05, c<.01, d<.001, based on a 2-tailed test. Significance levels are for differences between the reported means and, within geophysics, between unreported means for climatology/tectonics and oceanography, assuming equal variances.

Emergent Social Structure and Production of Trust
We are primarily interested in the process of creation of new structure in collaborations, and wish to estimate the relative impact of variables that we have identified as effects of heterogeneity (Section II.A of Table 1), and believe directly stimulate demand, compared to the impact of the level of pre-existing trust (Section I of Table 1).

In Table 2, the results for predictors of collegiality, variable 95, are reported. Given the small number of observations, we generally aimed to find simple, robust relationships involving few variables. In Model 1, we find that more oversight, as measured by the maximum number of levels of authority (v82), does significantly reduce the creation of collegiality; another measure of oversight, the number of government lead organizations (v8), has a surprising borderline (.1) positive significance. Since the latter variable is not significant when other variables are included, we dropped it from the analysis. Shared vehicles (v36, which should increase the need for trust) and exclusive-data use by PIs (v50, as an indirect measure of heterogeneity) both were significantly positive as expected in Models 2 and 3, but the dominant Models 4 and 5, explaining 73 and 76 percent of the variance, respectively, use only direct measures of heterogeneity (the number of different disciplines, v14) and oversight (measured by the maximum number of levels of authority, v82). For each of our dependent variables, here and below, we test whether there are significant differences between space science and geophysics in one or two full models as well as in a simple regression; the categorical variable 3 does not enter significantly in either Model 5 or 6 (or in any of the other models tested but not reported here). Overall, the pattern is much as we predicted, though with demand for social construction apparently resting most heavily on the initial heterogeneity and oversight control. However, we do find some support for the predicted effect of collaborations that need to coordinate action (v36, shared vehicles) and enact rules to protect the PI (v50) in increasing social construction.
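The regression tables that follow report adjusted R-squared, which penalizes the raw fit for the number of predictors; with only fourteen collaborations, that penalty is substantial. A minimal sketch of the standard formula (the numeric values are illustrative, not taken from the tables):

```python
# Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1),
# where n = observations and k = predictors (excluding the intercept).

def adjusted_r2(r2, n, k):
    """Adjusted R^2 for n observations and k predictors plus an intercept."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With n = 14 collaborations, the same raw fit adjusts down as k grows:
print(round(adjusted_r2(0.80, n=14, k=2), 3))  # 0.764
print(round(adjusted_r2(0.80, n=14, k=5), 3))  # 0.675
```

This is why the models in Tables 2 through 7 are kept deliberately small: each additional predictor consumes one of very few degrees of freedom.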

Table 2
Predictors of Collegiality

Dependent variable: 95—sum of 3 categorical variables on whether team worked in an informal, collegial atmosphere. 
Ind. Vars.   Model 1   Model 2   Model 3   Model 4   Model 5   Model 6
8: # govt. lead orgs.
14: # dif. disciplines
36: shared vehicles
50: PI excl. data use
82: max levs. authority
3: space sci. 1; geoph. 2
Adjusted R-squared
Standard errors are in parentheses below coefficients.
Prob(|t-statistic| > x): a<.1, b<.05, c<.01, d<.001.

We have similar expectations regarding the construction of rules on data and publications, but in Table 3 the only measure directly related to demand for social structure, the presence of conflict over interpretation of data (v59), is significant in the wrong direction. With further variable construction and analysis, we could explore this coefficient by seeing whether conflicts over interpretation of data tend to occur late in a collaboration, leaving insufficient time to develop ameliorative structure. Cultural heterogeneity, again, has a strong independent effect in the predicted direction. In Model 1, the number of scientific review boards (v85), taken from the organization chart, is positive and borderline significant. A second, more direct indicator of cultural heterogeneity, the number of different disciplines (v14), also has a significant positive effect on the number of rules created in the collaboration, and indeed eliminates the significance of v85 when both are included (Model 4). We also find, as predicted, a significant effect of high-value science (v102): the creation of rules in a collaboration increases as the value of the science goes up. Differences between space science and geophysics in the formation of rules within a collaboration never reach significance.

Table 3
Predictors of the Number of Rules
Regarding Data and Publications
Dependent variable: 92—sum of 8 categorical variables (1 yes, 0 no) on whether various types of rules on data use and publications are created in the collaboration.
Ind. Variables   Model 1   Model 2   Model 3   Model 4   Model 5
14: # dif. disciplines
59: conflict-data interp.
85: # scient. rev. bds.
102: high-val. science
3: space sci. 1; geoph. 2
Adjusted R-squared
Standard errors are in parentheses below coefficients.
Prob(|t-statistic| > x): a<.1, b<.05, c<.01, d<.001

Social construction processes, then, empirically occur in two primary areas across our fourteen collaborations: the development of new, collaboration-specific informal collegial relationships, resting on "informal trust" that builds on personal ties and the expectation of future interaction, and the creation of rules governing data sharing and subsequent publication, involving "formal trust" that is more impersonal and often guaranteed by third parties (for more on forms of trust, see Zucker 1986). In line with our theoretical argument, we find that both collegiality and formal rules are developed not by design but on an emergent, "as needed" basis in multi-institutional collaborations in space science and geophysics. Social construction occurs during the collaborations, though conditioned importantly by pre-existing social structure: the rules are newly created, not part of a pre-existing professional canon, and collegiality is a function of teamwork on these particular collaborations, not based primarily on prior interaction or collaboration. In considering collegiality, we found evidence that pre-existing formal trust mechanisms crowd out the production of informal mechanisms. Creation of formal rules on data and publications was conditioned on the presence of high-valued science worth protecting.

Predicting High Value Science: Collegiality and Rules in Production of Trust
As we suggested above, however, there is likely to be two-way causation between intellectual property rights secured by formal and informal trust mechanisms and the production of high-valued science: for individual scientists to perform at their peak within the collaboration, there must be trust that others in the collaboration will not use the intellectual ideas or the experimental results, including data, in ways that jeopardize the "ownership" of the individual scientist. In this way, the success of a collaboration rests in part on the ability to ensure trust sufficiently that the scientists involved feel comfortable employing their best ideas and talents. This was acknowledged numerous times in the interviews: "if the PIs deliver on what they have promised, this will be a very successful collaboration"—but that delivery was sometimes in doubt.

In Table 4 we examine the predictors of the scientific value of the collaboration (as judged by the scientists and administrators personally involved), including social construction of rules regarding use of the ideas and data by others and the development of collaboration-specific informal collegiality. We include the number of rules regarding data and publication (v92) in all of the models, and it is significant and positive across the board. But collegiality (v95) is not significant. One aspect of the pre-existing common culture has an estimated positive effect across Models 1, 2 and 3 (although not statistically significant in 3): the number of lead organizations that are government organizations (v8). Also, not surprisingly, year of funding is negatively related to high-value science since it takes some time to produce sufficient output for the success to be judged. Interviewees on several of our more recent collaborations felt it was too soon to judge success. Note that although a significant difference between space science and geophysics in the production of high-valued science is reported in Table 1 or, equivalently, in the simple regression Model 5, there is no significant difference due to v3 in Models 2-4. Therefore, the apparent difference in production of high-valued science is probably due to differences between space science and geophysics in the more basic determinants such as when the funding began. (On average, space science collaborations preceded the funding date of the average geophysics project by slightly over ten years.)
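Variable 102 is binary; if, as the reported adjusted R-squared values suggest, Table 4 was estimated by ordinary least squares, the fit is a linear probability model, whose fitted values are read as probabilities of producing high-valued science but are not bounded to [0, 1]. The sketch below uses invented coefficients and regressor values; nothing in it comes from the actual tables.

```python
# Hypothetical linear probability model for variable 102 (high-valued science).
# Intercept, coefficients, and regressor values are invented for illustration.

def lpm_fitted(intercept, coefs, xs):
    """Fitted 'probability' from a linear probability model."""
    return intercept + sum(b * x for b, x in zip(coefs, xs))

# Invented model: +0.08 per data/publication rule (cf. v92),
# -0.02 per year of funding start measured from 1960 (cf. v16).
p = lpm_fitted(1.2, [0.08, -0.02], [5, 20])
print(round(p, 2))  # 1.2 -- LPM fitted values can exceed 1
```

The out-of-range fitted value illustrates a standard caveat of linear probability models with small samples; with only fourteen cases, a logistic specification would be hard to estimate reliably.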

While as social scientists we focus on social construction processes and the trust produced by them, these social processes are obviously not the only factors in producing scientific output. There are a number of aspects of the fourteen collaborations studied that we believe are likely to be extremely important in producing high-quality, valuable science. One factor is the talent and energy of the scientists involved in each collaboration. This is sometimes especially important for the initiators of the collaboration: they develop the initial ideas, and if the collaboration is tightly focused, the quality of these initial ideas may determine the overall success of the collaboration. We lack strong independent measures of the quality of the initial ideas, such as rankings of proposals by peer review, and would in any case face the problem of the lack of a common metric across different disciplines and their diverse review panels.

Table 4
Predictors of High-Valued Science
Dependent variable: 102—1 if respondents report major discoveries and interesting results and high overall success; 0 otherwise.
Ind. Variables   Model 1   Model 2   Model 3   Model 4   Model 5
8: # govt. lead orgs.
16: year $s began
92: # rules re data, pubs.
95: collegiality
3: space sci. 1; geoph. 2
Adjusted R-squared
Standard errors are in parentheses below coefficients.
Prob(|t-statistic| > x): a<.1, b<.05, c<.01, d<.001.

To the extent that the collaboration is less tightly focused, differences in ability and in "scientific taste" of the individual scientist are extraordinarily important, and are the strongest predictors of why some collaborations are successful and others are not. Based on our research in the biosciences, we expect that the top one or two percent of the scientists produce about thirty percent of the science; these high producing scientists also tend to be the most highly cited (Zucker, Darby, Brewer, and Peng 1995; Zucker, Darby, and Armstrong 1995). As a collaboration involves more of these high producing scientists as PIs, its early productivity should increase (once the wider scientific community has access to the data produced by the collaboration, we expect other factors to emerge). But we lack data on the productivity of scientists prior to the collaboration and have only spotty information on their productivity within the collaboration for space science and geophysics, and also would face in any case the problem of comparing across fields of science with no common metric.

A second set of issues in determining success is that we have measures of subjective success, in terms of how the individual scientists interviewed by the project saw their own research and the research of the collaboration as a whole, but we do not have general measures of overall output, such as the number of publications or, better, the quality-adjusted number of publications (taking into account citations to the research). But again, it is not clear whether such measures are in fact valid when assessments are made across disciplines and research arenas. They are comparable only within the same, rather narrowly defined research area: scientific sub-communities cite each other's work, while highly related work seen as addressing a different scientific problem tends to be ignored.

Further, the output of each scientist is importantly determined by whether that scientist is provided adequate resources to develop and carry out a challenging research agenda. We measure this only when scientists mention problems with underfunding in interviews. In the fourteen collaborations we have studied, sufficient resources were almost always made available; in the two collaborations where they were not, both in geophysics, one PI simply quit the project while the other delayed analyzing the data until, if and when, more resources were made available. More commonly, social conditions either facilitated or hindered scientists' ability to make use of the resources. Some of the obstacles were simply bureaucratic: if the resource pie is divided among too many different investigators (as is sometimes done in the interests of involving the entire research community in a very large-scale project), then no one investigator may have the resources necessary to do the science right.

Creating Demand for Trust: Explanations of Required Coordination and Conflict
We had anticipated that the degree of required coordination would prove to be a significant determinant of the demand for trust, but neither this variable nor its components seemed to play a substantial role in determining informal or formal trust mechanisms. Nonetheless, Table 5 examines the degree of required coordination, which is of some interest in other regards.

Clearly, space science requires more coordination even when other variables are accounted for. There is a robust negative correlation between the number of levels of authority (v83) and the degree of required coordination, which is more puzzling than the positive impact of more numerous government lead institutions.

Interestingly, the degree of conflict experienced did not, per se, contribute significantly to explaining either the construction of formal or informal trust mechanisms during the collaboration or the perceived productivity of the collaboration. Nonetheless, more conflict may make for less happy collaborations, so we examine its determinants in Table 6. The strongest predictor of conflict was everyone being on one team (v42). We hypothesize that subdividing the work and interaction into separate, more congenial teams may be a useful conflict-avoidance device; at the least, forced fraternization appears more likely to lead to conflict. Interestingly, there also seems to be a payoff to a more complex administrative superstructure in conflict reduction. Among the other variables considered, the numbers of vehicles and of scientific review boards came closest to borderline significance, but no robust relationships were detected.

Table 5
Predictors of Degree of Required Coordination
Dependent variable: 91—sum of 4 categorical variables (1 yes, 0 no) on whether vehicles required coordination, instruments were constructed across institutional or national boundaries, and analysis involved coordinated data.
Ind. Variables   Model 1   Model 2   Model 3   Model 4
8: # govt. lead orgs.
79: deg. sci. group focus
83: # levels authority
3: space sci. 1; geoph. 2
Adjusted R-squared
Standard errors are in parentheses below coefficients.
Prob(|t-statistic| > x): a<.1, b<.05, c<.01, d<.001.


Table 6
Predictors of Degree of Conflict
Dependent variable: 93—sum of 3 categorical variables (1 yes, 0 no) on whether conflicts over data use, co-publishing, and authority existed.
Ind. Variables   Model 1   Model 2   Model 3   Model 4   Model 5
37: # of vehicles
42: 1 team only, y=1, n=0
85: # scient. rev. bds.
101: administ. complexity
3: space sci. 1; geoph. 2
Adjusted R-squared
Standard errors are in parentheses below coefficients.
Prob(|t-statistic| > x): a<.1, b<.05, c<.01, d<.001.

Predictors of Lengthy Privatization of Ideas and Data
We hardly need to document the resistance in science to building social structure. We have already discussed the cost of the social construction of rules and collaboration-specific collegial relations in terms of time and energy, and thus the positive role the new structure must play in scientific production to justify the diversion of time from the scientific enterprise.

Table 7 provides some information about a second major cost: the potential interference of these structures with the free movement of ideas, so that the new social structure may in fact produce the very problem it was designed to prevent. Clearly, this is one area where too much structure may be as harmful as too little: if, for example, rules heavily restrict use of the data by others for long periods of time, then protection of intellectual property may retard scientific progress more than would the loss of some PIs' efforts to other projects because of lack of trust within the collaboration.

Table 7
Predictors of PI Exclusive-Use of Data > 12 Months
Dependent variable: 103—1 if the PIs have an exclusive-use period for data and the maximum reported period exceeds 12 months; 0 otherwise.
Ind. variables (Models 1-4): 10: # universities among all insts.; 22: NASA y=1; 29: # foreign funding sources; 57: pre-publication review; 3: space sci. 1, geoph. 2.
[Coefficient columns for Models 1-4 and adjusted R-squared not recoverable in this copy.]
Standard errors are in parentheses below coefficients.
Prob(|t-statistic| > x): a<.1, b<.05, c<.01, d<.001.

The number of universities among all organizations (v10) is significantly and robustly positively associated with longer PI data exclusivity, possibly because of increased competition in the collaboration over publication. Pre-publication review (v57) as an indicator of lack of trust in the scientific judgement of other PIs generally also increases the probability that the PI retains exclusive use of the data for over 12 months. The number of foreign funding sources (v29) decreases the probability significantly in the first two models, although there is sufficient correlation among whether NASA is involved (v22), v29, and v57 that separate effects are difficult to pin down. When other variables are entered into the equation, there is no significant difference between space science and geophysics collaborations. However, taken alone, geophysics collaborations are slightly more likely to involve exclusive use of the data for over 12 months.
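The models in Table 7 are regressions of the binary exclusivity indicator on small sets of predictors. A hedged sketch of the estimation step only, with invented data standing in for the fourteen collaborations and a subset of the Table 7 predictors (none of the numbers below reproduce the study's results):

```python
import numpy as np

# Invented data for 14 collaborations; columns mimic a subset of the
# Table 7 predictors: v10 (# universities), v29 (# foreign funding
# sources), v57 (pre-publication review). Values are illustrative only.
rng = np.random.default_rng(0)
n = 14
X = np.column_stack([
    np.ones(n),                  # intercept
    rng.integers(1, 10, n),      # v10
    rng.integers(0, 4, n),       # v29
    rng.integers(0, 2, n),       # v57
])
y = rng.integers(0, 2, n).astype(float)   # v103: exclusivity > 12 months

# OLS coefficients via least squares (a linear probability model).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
```

With so few cases and a binary dependent variable, the fitted coefficients are fragile, which is exactly the caution the text goes on to state.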

We do not want to overgeneralize on the basis of fourteen collaborations and data that were gathered primarily for other purposes. In particular, we are more comfortable inferring the existence of the significant relationships we were able to uncover with so few cases than inferring the nonexistence of relationships that failed to reach statistical significance. That caveat aside, we did find some interesting results.

First, there was reasonably strong evidence that factors increasing the demand for trust—group heterogeneity and high-valued discoveries—increased the actual construction of formal and informal trust-creating devices (rules and collegiality). Second, there was some evidence that pre-existing trust-creating structure obviated further construction of trust mechanisms during the collaboration. Third, at least the formal trust-creating mechanisms constructed during a collaboration appeared to have functional value, enhancing the value of the science produced.

Generally, we found that space scientists and geophysicists behaved similarly: differences in outcomes reflected differences in underlying determinants, not scientific area. However, collaborations in space science clearly require a greater degree of coordination than those in geophysics. We found no relation between the degree of conflict and either the construction of trust mechanisms or collaboration productivity, but conflict seemed to increase when all scientists were forced to be members of one team. The greater the involvement of university scientists, the more likely PIs were to be granted exclusive use of data for more than one year.


References

Berger, Peter, and Thomas Luckmann (1966) The Social Construction of Reality. New York: Doubleday.

Chandler, Margaret K., and Leonard R. Sayles (1971) Managing Large Systems. New York: Harper & Row.

Eccles, Robert G. (1981) "The quasi-firm in the construction industry," Journal of Economic Behavior and Organization 2:335-357.

Faulkner, Robert R., and Andy B. Anderson (1987) "Short term projects and emergent careers: evidence from Hollywood," American Journal of Sociology 92:879-909.

Genuth, Joel (1995a) Report No. 2, Part A: Space Science, Section 2: "Historical-Sociological Report," AIP Study of Multi-Institutional Collaborations, Phase II: Space Science and Geophysics. Center for History of Physics, American Institute of Physics.

Genuth, Joel (1995b) Report No. 2, Part B: Geophysics and Oceanography, Section 2: "Historical-Sociological Report," AIP Study of Multi-Institutional Collaborations, Phase II: Space Science and Geophysics. Center for History of Physics, American Institute of Physics.

Genuth, Joel, Peter Galison, John Krige, Frederik Nebeker, and Lynn Maloney (1992) Report No. 4: Historical Findings on Collaborations in High-Energy Physics. AIP Study of Multi-Institutional Collaborations, Phase I: High-Energy Physics. Center for History of Physics, American Institute of Physics.

Granovetter, Mark (1985) "Economic action and social structure: A theory of embeddedness," American Journal of Sociology 91 (November): 481-510.

Preston, Richard (1987) First Light: The Search for the Edge of the Universe. New York: Atlantic Monthly Press.

Schutz, Alfred (1964) Collected Papers II. The Hague: Martinus Nijhoff.

Sisk, Bridget, Lynn Maloney, and Joan Warnow-Blewett (1992) Report No. 3: Catalog of Selected Historical Materials. AIP Study of Multi-Institutional Collaborations, Phase I: High-Energy Physics. Center for History of Physics, American Institute of Physics.

Tolbert, Pamela S. (1988) "Institutional sources of organizational culture in major law firms," in Lynne G. Zucker (ed.), Institutional Patterns and Organizations: Culture and Environments, pp.101-113. Cambridge, MA: Ballinger Publishing Co.

Warnow-Blewett, Joan, and Spencer R. Weart (1992) Report No. 1: Summary of Project Activities and Findings/Project Recommendations. AIP Study of Multi-Institutional Collaborations, Phase I: High-Energy Physics. Center for History of Physics, American Institute of Physics.

Warnow-Blewett, Joan, Lynn Maloney, and Roxanne Nilan (1992) Report No. 2: Documenting Collaborations in High-Energy Physics. AIP Study of Multi-Institutional Collaborations, Phase I: High-Energy Physics. Center for History of Physics, American Institute of Physics.

Zucker, Lynne G. (1977) "The role of institutionalization in cultural persistence," American Sociological Review 42:726-743.

Zucker, Lynne G. (1986). "Production of trust: institutional sources of economic structure 1840 to 1920," Research in Organizational Behavior 8:53-111.

Zucker, Lynne G. (1993) Detecting Patterns in High Energy Physics: Sociological Analysis of AIP Census and Focus Experiments. Report to the American Institute of Physics.

Zucker, Lynne G., Michael R. Darby, and Jeff Armstrong (1994) Intellectual capital and the firm: The technology of geographically localized knowledge spillovers. Working Paper No. 4946. Cambridge, MA: National Bureau of Economic Research.

Zucker, Lynne G., Michael R. Darby, Marilynn B. Brewer, and Yu Sheng Peng (1995, in press) "Collaboration structure and information dilemmas in biotechnology: Organizational boundaries as trust production," in Roderick M. Kramer and Thomas Tyler (eds.), Trust in Organizations. Newbury Park, CA: Sage Publications.

Zucker, Lynne G., and Ita G. G. Kreft (1994) "The evolution of socially contingent rational action: effects of labor strikes on change in union founding in the 1880s," in J. A. C. Baum and J. V. Singh (eds.), Evolutionary Dynamics of Organizations (pp. 294-313). Oxford, UK: Oxford University Press.

Zucker, Lynne G., and Pamela S. Tolbert (1995, in press) "Institutional analyses of organizations: Legitimate but not institutionalized," in Stewart Clegg, Walter Nord, and Cynthia Hardy (eds.), Handbook of Organizational Theory. London, England: Blackwell.

Attachment A-1
List of Variables Created

Coded variables

No. Variable Description
1. case #
2. alph. of project
3. Science: Space Science=1, Geophysics=2
4. Geophysics subtype: Space Sci.=0, Tectonics, Climate=1, Oceanography=2
5. # of countries involved
6. # of institutions in "lead" role
7. # of var. 6 insts.=univ.
8. # of var. 6 insts.=govt.
9. # of all institutions involved including lead and all insts. coded as "specific contacts on project staff, on other experiments, etc."
10. # of var. 9 insts.=univ.
11. # of var. 9 insts.=govt.
12. # of var. 9 insts.=other
13. # of AIP interviews
14. # of different disciplines as listed by interviewees re self
15. year initiate project
16. year funding began
17. year data collection began
18. year ended data collection
19. # of different funding sources
20. U.S. only: 1=yes; 0=no
21. U.S. govt.: 1=yes; 0=no
22. NASA: 1=yes; 0=no
23. NSF: 1=yes; 0=no
24. # of different U.S. govt. agencies
25. U.S. institutions: 1=yes; 0=no
26. # of U.S. institutions
27. U.S. industry: 1=yes; 0=no
28. # U.S. companies
29. # of different foreign sources
30. foreign govt.: 1=yes; 0=no
31. # of foreign govt. agencies
32. foreign institutions: 1=yes; 0=no
33. # of foreign institutions
34. foreign industry: 1=yes; 0=no
35. # of foreign companies
36. shared vehicle(s): 1=yes; 0=no
37. # of vehicles
38. vehicle(s) require coordination: 1=yes; 0=no
39. vehicle = ship (physical proximity): 1=yes; 0=no
40. any new instruments: 1=yes; 0=no
41. # new instruments constructed
42. 1 team only: 1=yes; 0=no
43. constructed across instit. boundaries: 1=yes; 0=no
44. constructed across national boundaries: 1=yes; 0=no
45. exciting design: 1=yes; 0=no
46. prior contact by 90%+ of interviewees: 1=yes; 0=no
47. analysis of PI data primarily: 1=yes; 0=no
48. analysis signif. involves coordinated data: 1=yes; 0=no
49. project has at least one explicit rule re data: 1=yes; 0=no
50. PI has exclusive use period: 1=yes; 0=no
51. min. time in months exclusive use
52. max. time in months exclusive use
53. archive in required location?: 1=yes; 0=no
54. coauthor rules explicit: 1=yes; 0=no
55. coauthor rules, initial pubs.: 1=yes; 0=no
56. required pub., e.g., if take samples: 1=yes; 0=no
57. pre-pub. review: 1=yes; 0=no
58. refuse to coauthor: 1=yes; 0=no
59. conflict over interpretation of data by others: 1=yes; 0=no
60. rules re data access: 1=yes; 0=no
61. other problems re data use: 1=yes; 0=no
62. rules re timing of pub.(e.g., no pub. before init. vol.): 1=yes; 0=no
63. conflicts over authority: 1=yes; 0=no
64. oversight is bureaucratic: 1=yes; 0=no
65. science level is too autocratic/bureaucratic: 1=yes; 0=no
66. oversight informal, empowering: 1=yes; 0=no
67. science level informal, empowering: 1=yes; 0=no
68. overall collab. style: 4=partnership/collegial, autonomy, self direction, respect for tech. judgement; 3=most proj. collegial, part autocratic; 2=most autocratic, part collegial; 1=autocratic/directive; little respect for tech. judgement of others
69. major discoveries: 1=yes; 0=no
70. interesting results: 1=yes; 0=no
71. new data: 1=yes; 0=no
72. overall success: 4=major initial objectives reached and main sci. purpose realized OR very important unexpected finding; 3=lots of data and articles, but major objectives not achieved; 2=lots of data, few pubs.; 1=little output from project
73. # responding on org. chart
74. # diff. countries of interviewees
75. high # of countries in chart
76. avg. # of countries in chart
77. # of interviewees with govt. level focus
78. # of interviewees with org. level focus
79. # of interviewees with scientific group level focus
80. high # of organizations in chart
81. avg. # of organizations in chart
82. high # of levels of authority in chart
83. avg. # of levels of authority in chart
84. high # of scientific review panels in chart
85. avg. # of scientific review panels in chart
86. administrative records centralized: 1=yes; 0=no
87. records scattered across organizations: 1=yes; 0=no
88. records scattered across individuals: 1=yes; 0=no
89. information flow: 4=records in centralized location and/or sub-project locations, at organizational level; 3=most records at official offices; some with individuals; 2=most records with individuals; some at official offices; 1=records retained by individuals and move with them
Transforms of coded variables 1-89
90. teamwork = 1 if var68 = 3 or 4 and 0 otherwise (i.e., var68 = 1 or 2)
91. coordination = var38 + var43 + var44 + var48
92. rules = var49 + var50 + var53 + var54 + var55 + var56 + var60 + var62
93. conflict = var58 + var59 + var61 + var63
94. bureaucratism = var64 + var65 + (1 - var90)
95. collegiality = var66 + var67 + var90
96. complexity = var76 + var81
97. pnetwork = var9/var13
98. leadcomplex = var5 + var6
99. profcomplex = var14/var13
100. fundcomplex = 1 if var19 > 1, 0 otherwise
101. admincomplex = var83 + var85
102. hivalsci = 1 if var69 + var70 + var72 = 6, 0 otherwise
103. pipriv = 1 if var50 = 1 AND var52 > 12, 0 otherwise
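The transformed variables above are simple arithmetic over the coded record. A minimal sketch in Python (the function name and the dict-of-variables representation are illustrative assumptions, not the study's actual coding scheme):

```python
# Sketch of the transformed variables (90-103) defined above, computed
# from a dict mapping coded-variable numbers to values.
def transforms(v):
    t = {}
    t[90] = 1 if v[68] in (3, 4) else 0                     # teamwork
    t[91] = v[38] + v[43] + v[44] + v[48]                   # coordination
    t[92] = (v[49] + v[50] + v[53] + v[54]
             + v[55] + v[56] + v[60] + v[62])               # rules
    t[93] = v[58] + v[59] + v[61] + v[63]                   # conflict
    t[94] = v[64] + v[65] + (1 - t[90])                     # bureaucratism
    t[95] = v[66] + v[67] + t[90]                           # collegiality
    t[96] = v[76] + v[81]                                   # complexity
    t[97] = v[9] / v[13]                                    # pnetwork
    t[98] = v[5] + v[6]                                     # leadcomplex
    t[99] = v[14] / v[13]                                   # profcomplex
    t[100] = 1 if v[19] > 1 else 0                          # fundcomplex
    t[101] = v[83] + v[85]                                  # admincomplex
    t[102] = 1 if v[69] + v[70] + v[72] == 6 else 0         # hivalsci
    t[103] = 1 if v[50] == 1 and v[52] > 12 else 0          # pipriv
    return t
```

Note that hivalsci requires the maximum on all three components (1 + 1 + 4 = 6), and pipriv combines the existence of an exclusive-use period (var50) with its maximum length (var52).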


Attachment A-2
Predictors of Number of AIP Interviews (Check for Selectivity Bias)
Dependent variable: 13—number of AIP interviews
Ind. variables (Models 1-6): 6: # lead insts.; 9: # all insts.; 14: # diff. disciplines; 45: exciting instrument design; 3: space sci. 1, geoph. 2.
[Coefficient columns for Models 1-6 and adjusted R-squared not recoverable in this copy.]
Standard errors are in parentheses below coefficients.
Prob(|t-statistic| > x): a<.1, b<.05, c<.01, d<.001.

(By Frederik Nebeker)
[Table of Contents]

The seismologist D.H. Matthews has written, "Remote sensing in the Earth sciences by geophysical techniques is particularly dependent on technological advances in electronics and engineering. When a new type of measurement becomes possible, results rush in, capsizing cherished theories[35]." This paper describes one very significant advance in geophysical instrumentation, the development of the very-broad-band (VBB) seismograph.

This new scientific instrument was designed "to record as accurately as possible three-dimensional ground motion over nearly the entire frequency and amplitude range seen in the far field of earthquakes[36]." It was meant to replace several traditional seismographs, each recording motion only in a narrow band of frequencies and in a restricted dynamic range. The VBB seismograph was designed in the early 1980s by Joseph Steim, a Harvard graduate student, and Erhard Wielandt, a professor at the Swiss Federal Institute of Technology. Steim soon built a prototype and in 1987 founded a company, Quanterra, to manufacture VBB seismographs. Steim designed the instrument to meet the specifications of a large-scale collaboration in seismology, the Incorporated Research Institutions for Seismology (IRIS), and Quanterra has made most of its sales to IRIS institutions. In a sense, then, the company is a spin-off of the IRIS collaboration.

The History of Seismographic Instruments
Early seismoscopes (for detection) and seismographs (for measurement) were intended for local earthquakes only. One of the first recordings of a teleseism (the signal of a distant earthquake) was made by Ernst von Rebeur-Paschwitz in 1889: two of his horizontal-pendulum seismographs in Germany recorded an earthquake that took place in Japan[37]. By the mid-1890s horizontal-pendulum seismographs were in many observatories worldwide[38].

Over the past hundred years seismologists have made many improvements in their instruments. One of the most important dates from 1903, when Boris Borisovich Galitsyn used an electrical coil moving in a magnetic field as a sensor of earth motions; by thus transducing mechanical motion to electrical current, one could both amplify the signal and separate the sensor from the recorder[39]. The latter ability made it much easier to achieve environmental isolation for the sensor.

Because most seismic motions are extremely small, some form of amplification is necessary. The seismographs of the 1890s achieved this by mechanical means (by using a lever), but when combined with mechanical recording, as was usual then, friction limited the sensitivity of the instrument. Seismologists therefore turned to photographic recording of the mechanically amplified motion, or photographic recording of optically amplified motion (using an optical lever, such as a mirror on a pendulum reflecting a light beam onto a scale). Since, however, photographic recording is both problematic and expensive, seismologists hoped that amplification of the electrical signal would solve the problem. It proved difficult, though, to build electronic amplifiers suited to the microvolt output of seismometers, especially at low frequencies[40]. In the 1950s and 1960s satisfactory amplifiers became available, but it was not until the integrated circuits of the 1970s that electronic amplification was clearly better than the alternatives[41].

One trend in seismograph development was eliminating or compensating for eigen-oscillations. A seismograph, like any mechanical system, tends to vibrate at certain frequencies when motion is imparted to it. In about 1898, E. Wiechert introduced viscous-damping to lessen the effects of the pendulum eigen-oscillations[42]. A much more satisfactory solution was eventually achieved through feedback. Negative force feedback may be used to keep the inertial mass close to its equilibrium position. This not only controls eigen-oscillations, but also gives the sensor a much more linear response[43].
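The effect of negative force feedback can be illustrated with a toy simulation: adding a restoring force proportional to displacement keeps the mass of a driven oscillator far closer to equilibrium, suppressing the eigen-oscillation at resonance. This is a schematic sketch with arbitrary constants, not a model of any real seismometer:

```python
import math

def simulate(feedback_gain, steps=20000, dt=1e-3):
    """Semi-implicit Euler integration of a damped mass-spring system
    driven at its natural frequency; feedback adds a restoring force
    -feedback_gain * x. Returns the peak displacement observed."""
    k, c, m = 10.0, 0.2, 1.0            # spring, damping, mass (arbitrary)
    w_drive = math.sqrt(k / m)          # drive at the natural frequency
    x, v = 0.0, 0.0
    peak = 0.0
    for i in range(steps):
        t = i * dt
        force = math.sin(w_drive * t) - k * x - c * v - feedback_gain * x
        v += (force / m) * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Without feedback the drive excites a large resonant response; with a
# strong feedback force the mass barely moves from equilibrium.
no_fb = simulate(0.0)
with_fb = simulate(1000.0)
```

The same suppression of large excursions is what makes the sensor's response more linear: the mechanical element never leaves the small region where its restoring force is well behaved.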

Another trend in seismograph development was the restriction of bandwidth to create specialized instruments[44]. A principal objective was to reduce sensitivity to the constant microseismic vibration of the earth (which has periods in the range from 3 to 8 seconds), caused mainly by ocean waves striking continental margins[45]. Thus, although distant earthquakes produce oscillations with periods ranging from a few tenths of second to 20 or 30 seconds, seismographs were typically designed to respond either to the oscillations below the marine microseismic range or to those above it, hence the traditional division of the seismic spectrum into a short-period and a long-period band[46]. Another principal reason for specialized instruments is that analog recording by pen on paper has a dynamic range of only about two orders of magnitude. One can avoid losing significant signals by restricting the frequency band of an instrument, since the dynamic range of a restricted band is smaller than that of the entire spectrum[47].

One of the most important changes in seismology is the fairly recent move to digital data. Perhaps the greatest advantage of digital data is that a much greater dynamic range of the seismic signal may be thus recorded. As just mentioned, pen-on-paper recording can register a range of only some two orders of magnitude, and this is also the case with traditional photographic recording[48]. Yet digital instruments of the early 1980s already had dynamic ranges of six orders of magnitude[49]. Another advantage of digital data is its increased communicability; any copy of the data is equivalent to the original. Moreover, digital data can be processed in different ways for different purposes. One can, for example, impose a frequency filter to obtain data comparable with that coming from a traditional narrow-band seismograph, and some signals, such as those from free oscillations of the earth, are very difficult to look for in traditional analog records but straightforward to detect in digital data[50].
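The dynamic-range figures quoted here convert directly between orders of magnitude, decibels, and converter bits; a small worked sketch (the function names are ours):

```python
import math

def orders_to_db(orders):
    """Amplitude dynamic range: n orders of magnitude = 20*n dB."""
    return 20.0 * orders

def orders_to_bits(orders):
    """Bits needed to resolve 10**orders distinct amplitude levels."""
    return math.log2(10 ** orders)

# Pen-on-paper recording: ~2 orders of magnitude -> 40 dB, ~7 bits.
# Early-1980s digital instruments: ~6 orders -> 120 dB, ~20 bits.
```

This arithmetic is why, later in the story, even a 24-bit converter only barely covers the eight to nine orders of magnitude a very-broad-band instrument must handle.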

It was for the last-named reason that the Harvard seismologist Adam Dziewonski began digitizing analog data in the 1960s[51]. Impressed by Dziewonski's findings concerning free oscillations, many other seismologists became interested in digital data. In the 1970s, interest in direct digital data (rather than digitized analog data) became widespread, and seismologists began to distribute direct digital data[52]. Digital recording was deployed worldwide for the first time in the High-Gain Long-Period Project in about 1970[53].

There were, of course, other technological advances, such as improvements in the electromechanical transducer and in the stability of power supplies. Particularly important were advances in recording technology. Magnetic tape recording, which began in seismology in the 1950s, allowed the recording of several channels of information on different tracks of a wide tape[54]. And in the 1960s, "digital recording on magnetic tape swiftly transformed the seismic reflection technique into the major tool used in the search for oil on land ...."[55] (In the following decade, seismic reflection profiling became a major research tool for academic geologists.)

Interest in seismology has also increased over the past hundred years, with the result that greater resources have been expended on the field. Concern about earthquakes and volcanoes, of course, continues. Two other important reasons for interest in seismology are the success of seismic prospecting (first demonstrated by the discovery in 1923 of a large oil field) and the need to identify underground nuclear tests (and distinguish them from earthquakes). The latter concern began as early as 1947, when the U.S. Army undertook the task of developing seismographic monitoring of such testing, and became much more urgent after the United States, the Soviet Union, and Great Britain agreed in 1963 to prohibit testing in the atmosphere, in space, and under water[56].

The Establishment of IRIS
Seismologists soon recognized the benefits of worldwide coordination of seismographic stations, especially to study very large earthquakes, to study earthquakes far from existing stations, to gain information about the deep structure of the earth, and to provide "the spatial and temporal baselines needed to calibrate and tie together the information collected by local and regional arrays[57]." As early as 1905, an International Association of Seismology was founded, and in the years around 1910 a uniform network of stations was established in North America[58]. In the early 1960s the World-Wide Standardized Seismograph Network (WWSSN) was established (132 stations equipped with standardized analog equipment), and data thus gathered "played a key role in seismology's contribution to the development and testing of the concepts of sea-floor spreading, continental drift, and plate tectonics[59]." In the 1970s, several sparse global networks with narrow-band, digitally recording instruments were established: 18 stations of the Seismic Research Observatory and Abbreviated Seismic Research Observatory network; 17 stations of the International Deployment of Accelerometers network; and 15 WWSSN stations upgraded with digital recording capability[60].

As valuable as these networks were, by the early 1980s many seismologists were quite conscious of their shortcomings. Principal among them was the use of narrow-band instruments, each for particular research or monitoring purposes; it was by that time clear that one could instead deploy broad-band instruments "not just for long-period seismologists or short-period seismologists, but for all seismologists."[61] Another major shortcoming was that there was no effective system for distributing the data[62].

In 1983 and 1984 the Incorporated Research Institutions for Seismology (IRIS) was established, "a private consortium of universities and research institutions sponsored and funded by the National Science Foundation[63]." One of the two major projects of IRIS was the Global Seismographic Network (GSN). Dziewonski was the first chairman of the committee set up to plan the new network[64]. A technical committee for the GSN was formed to consider what sort of instrumentation was appropriate. Joseph Steim, a graduate student of Dziewonski's at Harvard, served on this committee[65]. The design goals, published in the spring of 1985, listed the following technical requirements: digital data acquisition with near real-time data telemetry; dynamic range sufficient to resolve ground noise and to record the largest teleseismic signals; low-noise instrumentation and environment; linearity; standardization of system modules; and, perhaps most important, bandwidth sufficient to record the entire spectrum of teleseismic signals[66]. The last-named requirement has a history of its own.

Conception of the VBB Seismograph
Though the early seismographs of the 1890s were quite broadband, a long-standing trend in seismograph development, as mentioned above, was the restriction of bandwidth to create specialized instruments[67]. Interest in broad-band recording was renewed in the 1960s after magnetic tape-recording became available and after the introduction of electronic force feedback made it possible to realize a wide flat response[68]. In the early 1960s at the California Institute of Technology, a digital broadband seismograph was built, but did not attract much notice[69]. In 1972 in Czechoslovakia, A. Plesinger and J. Horalek put into service an analog seismograph having a flat response curve over a wide range of frequencies[70]. An array of digital broadband seismographs (known as the Graefenberg array) was installed in Germany in the 1970s, though the bandwidth and data quality were limited[71].

It was only slowly that the idea that a single general-purpose instrument could replace special-purpose instruments began to win adherents[72]. An important idea was that the broadband data can be digitally processed at the time of analysis in the manner appropriate for a particular problem, but this, of course, depends upon the availability of appropriate information-processing technology. In a 1986 paper Erhard Wielandt and Joseph Steim wrote, "Despite their potential advantages, broad-band seismographs have not been widely used. This is because a broad-band seismograph alone is not a very useful instrument. It requires playback and processing equipment to produce visible seismograms. Only digital recording and processing will permit the retrieval of a maximum amount of information. The computing facilities and software that make this practical are only now becoming available to many seismological observatories[73]."
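The point that broadband digital data can be processed at analysis time in the manner appropriate to a particular problem can be sketched with a crude FFT band-pass standing in for a traditional narrow-band instrument (cutoff frequencies and the synthetic record are illustrative):

```python
import numpy as np

def bandpass(signal, dt, f_lo, f_hi):
    """Zero out FFT components outside [f_lo, f_hi] Hz -- a crude
    after-the-fact 'narrow-band instrument' applied to broadband data."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), dt)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# Synthetic broadband record: a 0.05 Hz "long-period" wave plus a
# 1 Hz "short-period" wave, sampled at 10 Hz for 400 s.
dt = 0.1
t = np.arange(0, 400, dt)
record = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)

# Recover the long-period view alone from the single broadband stream.
long_period = bandpass(record, dt, 0.01, 0.2)
```

One broadband stream thus substitutes for separate short-period and long-period instruments, provided the computing facilities Wielandt and Steim mention are on hand.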

In the early 1980s Wielandt and Steim argued that even the very-long-period signals could be included in the range of the broadband seismograph, and for this instrument Steim suggested the name very broad band (VBB) seismograph[74]. And in the 1986 paper cited above, they presented the design of a VBB seismograph having, for each component of ground motion, a single sensor and single digital data stream. Two principal advantages of such a system were described by Steim as follows: "... elimination of the bias introduced by viewing the earthquake source and wave propagation in the Earth through the filter applied by narrowband, low-dynamic-range instrumentation, and the elimination of the uncertainty when data taken from many different types of instruments must be combined[75]." The story of the development and commercialization of this instrument is connected with the story of the IRIS collaboration.

Development of the VBB Seismograph
Joseph M. Steim, the son of a professor of chemistry at Brown University, made a hobby of electronics in his school days. He became an amateur radio operator and while still in high school published an article in an amateur radio journal on a logic circuit to control a radio repeater system[76]. In his undergraduate years at Harvard University, Steim maintained his interest in electronics; for three or four semesters, beginning in about 1975, he worked with a senior graduate student to automate the operation of a vacuum ultraviolet spectrometer[77]. Also as an undergraduate, he worked three years as a teaching fellow in an electronics course for physicists taught by Paul Horowitz[78].

In his senior year, Steim took two courses from the seismologist Adam Dziewonski. For one of them Steim built an ultrasonic seismograph—an ultrasound transmitter and receiver—and used it to measure the times of arrival of acoustic waves reflected from a surface[79]. In his report on this device, Steim mentioned as an aside that a digital seismic recorder could be built with a microcomputer (microcomputers were just then becoming available). For the other course he wrote a paper on "Minimization of noise effects in electromagnetic seismographs"[80]. It presented "a general design for a digital recording electronic feedback long-period seismograph in which the effects of noise have been reduced to a minimum."

Steim started graduate school in geophysics at Harvard in the fall of 1978. Because of his relative lack of background in the geosciences, his first two years required a good deal of course-work[81]. From the beginning it was clear to his advisor, Adam Dziewonski, that instrumentation was his greatest interest[82]. In 1979 Steim began working with Erhard Wielandt, a seismologist from the Swiss Federal Institute of Technology, who was then spending a year at Harvard. From conversations with Wielandt, Steim became interested in developing a highly sensitive portable seismometer that would generate digital data, a device that the newly available microprocessor seemed to make possible. Dziewonski sought to obtain some funding for this project by including in a 1979 proposal to the National Science Foundation (NSF) a request for the money to purchase the equipment needed to construct two of the portable seismographs to be built by Steim[83]. The seismograph would combine a Willmore Mark IV electronic-feedback seismometer and a MC6800 microprocessor based data-acquisition system[84].

In 1980 Steim submitted a proposal for his Ph.D. thesis project: the development and deployment of a portable seismometer in order to investigate the global variability of the seismic properties of the earth's upper mantle[85]. The proposal was, in Steim's words, "a thinly veiled excuse to make a system to better collect [data][86]."

Over the next half dozen years, Steim continued to work with Wielandt, who, probably even more than Dziewonski, served as his mentor[87]. Wielandt, working with another graduate student G. Streckeisen, had shortly before developed a highly sensitive seismometer of relatively small size and broad response, the STS-1 leaf-spring seismometer[88]. In about 1981 Steim and Wielandt began working to increase the bandwidth of this device[89]. They redesigned not the mechanical part, but the system that controlled the feedback force. By the summer of 1983 they had succeeded, and this was, according to Dziewonski, one of Steim's two large achievements as a graduate student[90].

In 1981 Dziewonski and Steim submitted an NSF proposal for "Development and deployment of portable broad-band digital accelerometers[91]." They sought funding to continue the development of a portable broad-band digital seismograph. The proposal was not funded; some reviewers may have questioned whether such a device was technically feasible, others whether it was important to have broad-band response in a portable instrument[92]. This setback was certainly one factor in Steim's decision to turn to the development of a broadband observatory seismograph[93].

One of Steim's first objectives was to develop a very stable and sensitive DC amplifier[94]. It is extremely time-consuming to test the performance of an amplifier to be used for long-period signals, and Steim spent perhaps a year on such tests[95].

In 1983 Steim, Dziewonski, and Wielandt submitted an NSF proposal, "Design and deployment of three very broad-band digital seismic stations[96]." They received money for only one instrument, instead of three, and for only two years, instead of three[97]. Work proceeded well, and at an American Geophysical Union meeting in December 1983 Steim presented some of his results on long-period, broad-band, and very-broad-band digital recording[98]. Steim argued that, with digital recording, the segmentation of the seismic spectrum is unnecessary, a message perhaps not welcome to two other speakers at the same session, one of whom talked about a network for intermediate wavelength seismology (NARS), the other about a long-period network (Geoscope).

In 1984 Steim and Dziewonski secured additional funding for Steim's work by reaching an agreement with the Italian national geophysical institute to construct two prototype very-broad-band (VBB) seismic stations with digital recording[99]. This work was a continuation of Steim's collaboration with Wielandt, and the instrument developed was a further modification of the Wielandt-Streckeisen STS-1 leaf-spring seismometer.

In 1983 and 1984 it was not clear that Steim and Wielandt's system—a high-dynamic range seismograph containing all seismic signals in a single output—was going to work[100]. But results achieved in late 1984 and early 1985 removed their doubts, and in May 1985 Steim and Wielandt presented their results at the AGU spring meeting[101]. The system was more fully described in an article in Annales Geophysicae the following year[102].

Available analog-to-digital converters had a dynamic range about two orders of magnitude less than needed for the VBB seismograph. For the first of the VBB seismographs designed by Steim and Wielandt, this problem was solved by gain-ranging. With this arrangement, however, the resolution of small signals was lost when large signals were being recorded. Steim hoped to be able to solve the problem by developing a new class of ADCs that would have the requisite dynamic range[103].
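
The shortfall can be made concrete with a back-of-envelope calculation (our illustration, not from the original reports): an ideal linear N-bit converter resolves 2^N levels, i.e., N x log10(2) decimal orders of magnitude.

```python
import math

def orders_of_magnitude(bits):
    """Dynamic range of an ideal linear N-bit converter, in decimal orders."""
    return bits * math.log10(2)

# A 16-bit converter, typical of the early 1980s, spans ~4.8 orders;
# even a 24-bit converter spans only ~7.2 orders, roughly two short of
# the eight to nine orders a very-broad-band seismograph must record.
for bits in (16, 24):
    print(bits, round(orders_of_magnitude(bits), 1))
```

On this rough reckoning, a converter about two orders of magnitude short of the requirement is exactly the situation described above.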

Steim investigated a 24-bit converter made by Gould Industries for use in oil exploration. He found that it did work effectively, but its cost of $50,000 made it too expensive for wide use in seismology[104]. In August 1985 in a telephone conversation with Wielandt, Steim got the idea for a new type of digital encoder, what he came to call the "Quantagrator[105]." In Steim's words, it "combines the basic principle of enhancement by noise differentiation and oversampling ... with an implementation of the offset-ranging technique[106]." He and Wielandt constructed a numerical model of the device, and soon thereafter built a working prototype[107]. The Quantagrator made it possible for the first time to achieve the goal of recording in a single data stream all the information of interest to seismologists (with the exceptions of some extremely high frequencies normally not of global interest and the extremely large accelerations near major earthquakes)[108].
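
The quoted description gives only the principle. A toy illustration (ours, not Steim's circuit) of resolution enhancement by oversampling: if a signal carries noise, or deliberate dither, spanning about one quantization step, then averaging many coarsely quantized samples recovers the signal to well below a single step.

```python
import random

random.seed(42)

STEP = 0.1          # coarse quantizer step
TRUE_VALUE = 0.337  # constant input, deliberately between quantizer levels

def quantize(x, step=STEP):
    """Round x to the nearest level of a coarse quantizer."""
    return step * round(x / step)

# A single coarse sample can be off by up to half a step (here ~0.037).
single = quantize(TRUE_VALUE)

# Oversampling: quantize many samples with dither noise spanning one
# step, then average; the quantization error averages out.
N = 20000
dithered = [quantize(TRUE_VALUE + random.uniform(-STEP / 2, STEP / 2))
            for _ in range(N)]
oversampled = sum(dithered) / N

print(abs(single - TRUE_VALUE))       # error of one coarse sample
print(abs(oversampled - TRUE_VALUE))  # much smaller error
```

The Quantagrator combined this oversampling principle with offset-ranging; the sketch above shows only the statistical core of the idea.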

The VBB system constructed by Steim at Harvard in 1985 covered a range of frequencies of four or five orders of magnitude and a range of amplitudes of eight to nine orders of magnitude in ground acceleration[109]. In a sense, this system became the prototype for IRIS's Global Seismographic Network (GSN) stations; according to a U.S. Geological Survey report on IRIS, "Most of the important design features used in developing and configuring hardware and software for the GSN data system evolved from a very broadband (VBB) seismograph system conceived and developed at Harvard University by J. M. Steim (1986)[110]."

Not long afterwards, Steim made another important contribution to seismology: he demonstrated the feasibility of disseminating broad-band digital data remotely over dial-up telephone lines. Because by the mid-1980s personal computers had become common, he decided to implement the idea, which others had proposed in the early 1980s[111]. Called "Dial-A-Broad-Band-Seismogram," the service first became available in 1986, and Steim made it more widely known by presenting a talk on it at the spring 1987 AGU meeting[112]. Dziewonski calls this a revolutionary development[113]. Once it was clear that the instrumentation worked, it was adopted rapidly. The success of this telephone access to global observatory-quality VBB data led IRIS to drop its plan to use satellite telemetry for that purpose, which would have been much more expensive[114].

The Founding of Quanterra
In 1987, Steim established Quanterra, Inc. as a company to manufacture a VBB seismograph. The first sales were of the digitizer alone, made to people who already had the Streckeisen seismometer[115]. The idea was to keep the price low by offering a standardized product. Steim had built the prototype Quantagrator himself at Harvard; the first ones made for sale were manufactured in Switzerland by Streckeisen's company, acting as subcontractor to Quanterra.

For a time Steim worked as consultant to Gould, Inc., and the technical proposal submitted by Gould in October 1986 to IRIS states, "The Gould IRIS/GSN system design will be based on the proven system developed in cooperation with Dr. J. M. Steim of Harvard University[116]." The relationship continued when Martin-Marietta acquired Gould. It was essentially Steim's system that Martin-Marietta sold, in 8 or 10 copies, to IRIS[117], and in 1988 Steim helped Martin-Marietta design a packetized communications link ("connecting a remote data acquisition computer to a recording site") for IRIS[118]. Then, as his own company began to compete with Martin-Marietta for IRIS business, Steim ended this consultancy[119].

In 1987 Steim's system was adopted for the new stations of the Geoscope project, a network of seismographic stations with headquarters in Paris[120]. Other early customers were the California Institute of Technology and MEDNET. Steim sold software (for recording the data) along with the hardware[121]. For the first year or so, the digitizer and the software were the company's only products; then Quanterra began assembling the computers to run the software[122].

In 1989 Steim designed a new product, the Q680—a compact, low-power, 6-channel 24-bit system with a sampling rate of 80 hertz—and the first models were delivered the following year[123]. The Q680 was selected for the United States National Seismographic Network (resulting from a 1987 agreement between the U.S. Geological Survey and the National Research Council), which became operational in 1991[124].

At the end of 1989 Steim succeeded in raising half a million dollars in capital in order to expand the company, and with the new Q680 system the company had much greater control of all aspects of production, either doing the work itself or subcontracting locally[125]. Steim also wrote a data-compression algorithm. This algorithm came to be widely used; indeed, all IRIS-2 systems made use of it for all their data[126].
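
The algorithm, known in seismology as Steim compression and later standardized in the SEED data format, exploits the fact that successive samples of a seismic trace usually differ by small amounts, so first differences can be stored in far fewer bits than the samples themselves. The sketch below is our simplification: it flags each difference with a 1-, 2-, or 4-byte width, whereas the real scheme packs several differences into fixed 32-bit frames.

```python
def compress(samples):
    """Encode first differences, each flagged with its byte width (1, 2, or 4).

    Simplified illustration of difference-based compression; the actual
    Steim scheme packs multiple differences into 32-bit frames.
    """
    out = []
    prev = 0
    for s in samples:
        d = s - prev
        prev = s
        for width in (1, 2, 4):
            if -(1 << (8 * width - 1)) <= d < (1 << (8 * width - 1)):
                out.append((width, d))
                break
    return out

def decompress(encoded):
    """Rebuild the sample stream by accumulating the differences."""
    samples, prev = [], 0
    for _, d in encoded:
        prev += d
        samples.append(prev)
    return samples

# Quiet background motion yields small differences, so most samples
# shrink from 4 bytes to 1.
signal = [100000, 100003, 100001, 99998, 100005, 100020, 99950]
packed = compress(signal)
assert decompress(packed) == signal
```

Only the first value (a full-sized difference from zero) costs 4 bytes here; the rest fit in single bytes, which is the source of the large savings on continuous VBB data.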

Many seismologists, long accustomed to analog seismic recordings, found the digital data less informative, so in about 1992 Quanterra introduced an "inverse digitizer" that produces an analog output from a digital seismograph[127]. Then with the inverse digitizer and an appropriate software filter, the Q680 system could emulate conventional seismographs.
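
Emulation of this kind is, in essence, an ordinary digital-filtering operation: convolve the broad-band record with the impulse response of the instrument being imitated. A generic sketch (ours; the smoothing kernel below is a stand-in, not any actual instrument response):

```python
def convolve(signal, impulse_response):
    """Direct convolution: filter a digital record with a response."""
    n, m = len(signal), len(impulse_response)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# A short smoothing kernel suppresses high frequencies, roughly as a
# conventional long-period seismograph's narrower response would.
broadband = [0.0, 1.0, -1.0, 2.0, 0.0, -2.0, 1.0]
response = [0.25, 0.5, 0.25]
emulated = convolve(broadband, response)
```

Because the full broad-band record is retained, any number of traditional instruments can be emulated after the fact simply by choosing different response filters.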

Knowing the exact time of a recorded seismic signal has always been a problem for seismologists. With the advent of the Global Positioning System (GPS), which uses satellite transmissions to allow precise determination of position and time, the problem could be solved by incorporating a GPS receiver in the seismograph, and Quanterra began doing this in about 1992[128].

Quanterra has sold hundreds of systems that are now deployed in dozens of countries worldwide. In the early 1990s some competing companies appeared, and Quanterra has begun collaborating with one of them, each company providing part of the final product[129]. Steim says that he wants to keep the company small for stability and continued profitability. He regards as a model for his company the one founded almost twenty years ago by G. Streckeisen to manufacture seismic sensors. Streckeisen has deliberately restricted growth, maintaining business at a level of a couple of million dollars in annual sales. The stability is valuable to the customers in that they can count on a consistent product and on product support over the long lifespan of the instrument[130].

The Quanterra VBB seismograph illustrates a remarkable trend in recent instrumentation: electronics and computers make possible the design of a general-purpose instrument that can perform a great variety of specific functions by means of appropriate programming.

For example, during and after World War II there was a rapid development of analog computing machines, ones for solving a particular class of differential equations, ones for designing airfoils, ones for simulating electric-power grids, and so on. By 1970 most of these special-purpose computers had been replaced by the general-purpose electronic digital computer. Appropriate software makes the general-purpose device perform as a special-purpose device.

In music, an electronic synthesizer can be made to sound like almost any of the traditional instruments.

In astronomy in recent decades, charge-coupled devices, having a wide dynamic and frequency range, have been used to build quite general-purpose telescopes. For example, the Wide Field/Planetary Camera of the Hubble Space Telescope allows for the taking of different sorts of images, which result from ground control of the telescope and computer processing of the received data[131].

The Hewlett-Packard 1991 Test & Measurement Catalog lists hundreds of instruments[132]. Most of them have specific capabilities suited to particular applications, but some instruments are designed to be general-purpose, such as the High-speed General Purpose Oscilloscope HP 54111D, the Universal Time Interval Counter HP 5370B, and the HP 6651A Power Supply.

A general-purpose device that takes on specific functions by programming has several advantages over special-purpose hardware. The users may employ the device in two or more different ways by use of different software, and may take advantage of some advances in the way data are taken by acquiring new software. By writing the software themselves, the users can customize the data-acquisition process. And—a very important point—a standard product that is produced in large numbers can be manufactured for a smaller unit cost.
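
The point can be restated in software terms with a hypothetical illustration of our own: a single acquisition device takes on different "personalities" through interchangeable processing routines.

```python
class GeneralPurposeRecorder:
    """One standard hardware front end; behavior set entirely by software."""

    def __init__(self, process):
        self.process = process  # interchangeable "personality"

    def record(self, raw_samples):
        return self.process(raw_samples)

# Two different instruments from the same hardware, by swapping software.
peak_meter = GeneralPurposeRecorder(lambda xs: max(abs(x) for x in xs))
mean_remover = GeneralPurposeRecorder(
    lambda xs: [x - sum(xs) / len(xs) for x in xs])

data = [3, -1, 4, -1, 5]
print(peak_meter.record(data))    # largest excursion
print(mean_remover.record(data))  # de-meaned series
```

Upgrading or customizing such an instrument means shipping new software, not new hardware, which is precisely the economic advantage described above.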

The story of Quanterra not only illustrates this trend, but shows also how small companies can be extremely effective in disseminating technological advances that meet scientific needs. If Steim had not commercialized his product—even if he had developed it within IRIS for that collaboration's instrumentation—the new technology would have been adopted more slowly[133].

The story of Quanterra is part of the general movement in science and technology in recent decades from analog to digital data. It shows also an interesting technological convergence: the technology developed for an observatory seismograph turned out—because of its low cost, small size, and modest power requirements—to be adaptable for portable seismographs[134]. Finally, the story of Quanterra provides an excellent example of technological advance in response to scientific needs[135].

[Table of Contents]

Since World War II, the organizational framework for scientific research is increasingly the multi-institutional collaboration. However, this form of research has received slight attention from scholars. Without a dedicated effort to understand such collaborations, policy makers and administrators will continue to have only hearsay and their own memories to guide their management; even the records necessary for efficient administration, for historical and management studies, and for posterity, will be largely scattered or destroyed.

The Center for History of Physics of the American Institute of Physics, in keeping with its mission to preserve and make known the record of modern physics and allied sciences, is working to redress this situation with a three-stage investigation into areas where multi-institutional collaborations are prominent. The study began in 1989. Phase I, which focused on high-energy physics, was completed in 1992[136]. Phase II, which addressed collaborative research in space science and geophysics, is completed with this report. Phase III, now underway, will focus on comparative studies of other fields in science and technology and general questions of documentation policy and practice.

The goal of the study is to make it possible for scholars and others to understand these transient "institutions." In order to locate and preserve historical documentation, we must first get some idea of the process of collaborative research and how the records are generated and used. Hence, we are making a broad preliminary survey, the first of its kind, into the functioning of research collaborations since the mid-1970s that include three or more institutions. Our study is designed to identify patterns of collaborations and define the scope of the documentation problems. Along the way, we are building an archives of oral history interviews and other resources for scholarly use. The AIP Center will make use of its findings to recommend future actions and promote systems to document significant collaborative research.

We focus on major research "sites." In high-energy physics, sites are accelerator facilities; in space science and geophysics, they are research vehicles (spacecraft and ocean-going vessels) or other systems for data gathering (such as drill holes and seismic networks and arrays). During the study of space science and geophysics, we conducted close to 200 interviews with academic and government scientists and administrators involved in 14 selected case studies. Qualitative analysis of these interviews provides a foundation for generalizing about how scientists view the process of collaborative research and on where they think records of historical value may be found. In addition, we gave attention to industrial subcontracting because of the managerial issues this practice poses and the further dispersal of records it implies. We also conducted "special perspective interviews" with persons such as program officers of funding agencies and discipline policy makers, who have special information of value to our understanding of collaborative research.

Interim reports on archival, historical, and sociological findings issued at the end of each phase will culminate in final reports and recommendations at the end of the long-term study. Other resources developed throughout the study, including oral history recordings and transcripts, will be available at the AIP Center's Niels Bohr Library. Working in cooperation with institutional archivists, we will also locate and assist with the preservation of records at appropriate repositories, field-testing possible approaches and solutions. Indexed information on all these collections will be made widely available to scholars.

The project is directed by Joan Warnow-Blewett with the assistance of Spencer R. Weart. Joel Genuth has served as project historian, and Lynn Maloney, Janet Linde, and Anthony Capitos have served successively as project archivist. The main consultants for Phase II were historians Robert Smith (space science), Naomi Oreskes (geophysics), and Frederik Nebeker (former project historian and now conducting a joint AIP-IEEE study of contracting), archivist Deborah Cozort Day, and sociologist Lynne Zucker.

Support for this phase of the project was provided by the Andrew W. Mellon Foundation, National Historical Publications and Records Commission at the National Archives and Records Administration, and the National Science Foundation. Additional support from the Andrew W. Mellon Foundation made it possible for the AIP project team to conduct the parallel study of the European Space Agency and funded international travel.

Changes in project archivists along with the relocation of the AIP headquarters affected the efficiency of the project. Lynn Maloney left her position as project archivist in early February 1992. Janet Linde was chosen to be the new project archivist and began her tenure in April 1992, but resigned her position rather than relocate to College Park when the AIP headquarters moved to its present location in October 1993. In mid-March 1994, Anthony Capitos began work as project archivist.

The study of collaborations in space science and geophysics was aided by a Working Group which included expert scientists, historians, archivists, and sociologists. The members are listed in Attachment C-1.

The Working Group for Space Science and Geophysics met at the AIP headquarters building in New York on 8-9 November 1991 and on 21-22 May 1993. As in our previous study of high-energy physics, the main purpose of the first meeting of the Working Group was to acquaint the space scientists and geophysicists with the goals and methodology of the project and to have the scientists acquaint project staff and other members of the Working Group with characteristics of their scientific disciplines. There was extensive discussion of the differences between high-energy physics and the several disciplines we were studying in space science and geophysics. Three major differences (the greater diversity, the greater complexity, and the longer "pre-history" of projects in space science and geophysics) led to the decision to select fewer case studies and not to attempt probes (in-depth studies of specific projects).

Prior to the second meeting, each member of the Working Group received substantial reports on preliminary archival and historical-sociological findings; the reports made possible less formal staff presentations and more group discussions. There was, for example, some discussion as to when and whether to compare space science and geophysics with high-energy physics. It was concluded that we should treat the space science and geophysics fields first of all on their own terms and only then draw comparisons with high-energy physics. A major portion of the second Working Group meeting was a review of staff findings for each of the selected case studies in order to discuss the quality of the findings and to prioritize the list of remaining interview subjects. The group felt there were several areas in which the project should strengthen its historical-sociological findings. These included a fuller examination of the role of NASA Headquarters and its working groups in initiating the projects and setting their scope, and the intellectual or international factors in the creation and operation of collaborations. Accordingly, later interviews concentrated on individuals with administrative responsibility for collaborations. They were likely to know or have insights into the perspective of NASA Headquarters, to have contacts with international or military organizations, and to be sensitive to the presence or lack of intellectual cohesion in collaborations.

The broadest level of Phase I of the study of multi-institutional collaborations was the creation of a census of projects, with the names of all participants, their institutional affiliations, and the publications the projects produced. Despite the expenditure of much effort, project staff were unable, in Phase II, to find existing electronic databases that could serve as the basis for a comparable census useful for historical or sociological study of either space science or geophysics.

A. Selection of Case Studies
Following the recommendation of the Working Group to conduct a thorough investigation prior to our final selection of case studies, Maloney (with the assistance of Genuth) made extensive efforts to assemble information and literature on over 30 projects that were candidates for our case studies. These efforts included discussions with members of the AIP Study's Working Group, administrators at the NSF, NASA, and other agencies, and principal investigators of candidate projects. Files for each candidate project were assembled. A meeting at AIP with consulting historians Robert Smith and Naomi Oreskes was held in February 1992 to review candidate projects. We were concerned, among other things, to avoid recent cases that scientists would not talk about candidly and to ensure that our sample would expose us to a wide range of variables that could affect records creation. Twelve projects were selected to serve as our case studies for space science and geophysics. They cover a range of scientific disciplines, observation platforms, and institutional participants, and the same year span as those for high-energy physics (from 1973 to the near present). See Report No. 2, Part A: Space Science, Section 1: "Selected Case Studies in Space Science" and Part B: Geophysics and Oceanography, Section 1: "Selected Case Studies in Geophysics and Oceanography" for information on the selected case studies.

B. Selection of Individuals to be Interviewed
Genuth spent most of three months contacting leaders of the chosen projects through phone, e-mail, and correspondence to collect the information needed to define interview programs. In order to understand later stages of the projects (experiments, oceanographic legs, etc.), it was found necessary to include interviews on the infrastructure and records creation during the formative (pre-funding) stage. This formative stage of projects in space science and geophysics has typically been lengthy, political, multi-institutional, and often multi-national.

We planned to conduct the same number of hours of interviews on our 12 case studies as we did for those in high-energy physics (over 400 hours); this translates into roughly 200 interviews. Ten of these interviews were reserved for special-perspective interviews with key policy-makers in these disciplines, to give an overview of the discipline which might be missed during the case study interviews. The remaining interviews were dedicated to the 12 case studies. The study of contracting to industry involved another roughly 25 interviews. What appears to be a luxury (allowing an average of 16 interviews per case study) is a necessity for the fields of space science and geophysics. Even with this many interviews, we usually had to define some kind of sub-set of the project for detailed examination, in order to be able to cover both high-level administrative issues and working-level details. For example, our interview program on Voyager concentrated on four of ten experiments (instruments), and our program for the Deep Sea Drilling Project (and its successor, the Ocean Drilling Program) focused on two of the more than 100 legs that have been conducted.

C. Development of Question Sets for Interviews
Prior to the November 1991 meeting of the Working Group, draft interview question sets were drawn up, with the help of our consultants, and distributed. After discussions by the Working Group, the question sets were further revised to reflect the differences in terminology among the several fields and to capture the different range of practices that appear to be prevalent in space science and geophysics. These question sets were still primarily applicable to interview subjects close to activities related to instrumentation. Because a good fraction of the interviews would be with policy makers and administrators, we produced several variations on the question sets to capture the broad range of roles and specialties among our interviewees. Two question sets (one for space science and the other for geophysics and oceanography) were developed to interview principal investigators and their teams. Additional question sets were prepared for interviews with scientists and engineers at NASA Headquarters and flight centers and with other policy makers and administrators. A copy of the question set for policy makers is in Attachment C-2.

D. Interviewing Activities on Selected Projects
Between March 1992 and May 1994, we made 24 major field trips, in addition to nearby visits. Appointments were made whenever possible with the archivists at each of the interviewee's institutions to discuss the project's documentation goals, the particular situation of the interviewee's files, and the current policies of the institutional archives.

Transcribing of the tape-recorded interviews, under the direction of the AIP Center's administrative assistants, continued throughout the project.

E. Historical and Archival Analysis of Interviews

Work began on the historical and archival analysis of interviews during the fall of 1992. To index interview transcripts, we developed a form covering historical themes and archival issues. With help from a graduate student, Martha Keyes, project staff were able to index interview transcripts quickly and efficiently. Project historian Genuth was also aided by graduate students in his historical analysis of the indexed transcripts. All 192 transcripts were indexed and analyzed for their historical and archival content. A copy of the indexing form is in Attachment C-3. For further information, see Report No. 2, Part A: Space Science, Section 2: "Historical-Sociological Report" and Section 3: "Archival Findings and Analysis" and Report No. 2, Part B: Geophysics and Oceanography, Section 2: "Historical-Sociological Report" and Section 3: "Archival Findings and Analysis."

F. Sociological Analysis of Interviews
Lynne Zucker, professor of sociology at UCLA, carried out a sociological analysis of the interviews with the aid of her graduate students. For further information, see Report No. 2, Appendix A: "Sociological Analysis of Multi-Institutional Collaborations in Space Science and Geophysics."

A parallel study of the CERN laboratory in Geneva, conducted during our work on high-energy physics, proved useful; therefore we repeated this approach for space science by doing a parallel study of the European Space Agency (ESA). Two historians of science, John Krige and Arturo Russo, who are currently under contract to write a history of ESA, helped to develop strategies to study three projects: one ESA-sponsored collaboration and two collaborations jointly sponsored by ESA and NASA. One of our originally selected space science case studies, the International Ultraviolet Explorer, was expanded to include study of the European participation, while case studies were added of the Giotto project and the International Sun-Earth Explorer. Krige and Russo conducted about one-half of the interviews for the Giotto and ISEE collaborations; the AIP project staff conducted the balance and analyzed all the transcripts.

The Center for History of Electrical Engineering of the Institute of Electrical and Electronics Engineers joined with us in supporting Frederik Nebeker to conduct a study of contracting to industry. It was decided that the Quanterra Corporation and the development of Very-Broad-Band Seismography should be the focus of our investigation of industrial research. Nebeker prepared a general report based on the AIP Study's interviews and conducted interviews of principals at the Quanterra Corporation, a major provider of electronic components for seismological research. Nebeker's general report on contracting has been drawn on freely in various of these reports; his focused report, "The Development of Very-Broad-Band Seismography: Quanterra and the IRIS Collaboration," is in Report No. 2, Appendix B.

We conducted perspective interviews, outside of the selected case studies, to supply missing pieces from the broader viewpoints of community leaders such as administrators at funding agencies and international organizations. Altogether 11 formal perspective interviews were conducted along with informal visits to other administrators and scientists.

A. Basic Activities
There are a number of steps the project has taken during the study of space science and geophysics in order to have an impact on the records-keeping practices of archivists and scientists. From the start, our interviews with scientists were conducted at their home institutions so that we could review their files and also meet with as many as possible of the institutional archivists or records officers to talk about project goals and their current archival programs. These meetings with scientists were, virtually without exception, the first time anyone had discussed with them the potential historical value of their papers. Based on previous experience of the AIP Center, we believe these discussions will have a positive impact on care of records. The meetings with those responsible for records strengthened the AIP Center's cooperative ties, gave us "grass roots" information on the likelihood of saving records of multi-institutional collaborative research, and in return let us provide information and encouragement.

More specifically, the question sets used for interviews with senior physicists and other members of the collaborations were designed with archival goals in mind. The question set in Attachment C-2 shows how each step of the collaborative process was covered, from prefunding initiatives through publication of research results. There was considerable emphasis on organizational and social issues that impact on records, such as communication patterns, delegation of responsibilities, degree of bureaucratization, impact of computer technology, role of internationalism, and the use of subcontracting to industry. Further issues relating directly to archival matters were those of records creation, use, and reuse for scientific purposes. We were particularly keen to capture information about electronic records, especially the use of e-mail. Information specific to records was entered into a database for analysis by Anthony Capitos.

B. Archival Information Database
The Working Group advised during its first meeting that we would gather more information on the record-keeping practices of our interview subjects if we left them with a questionnaire form to fill out. Unfortunately, the questionnaire approach was far less successful than we had hoped. Many subjects failed to send in their questionnaires, and those that were received were inconsistent in their coverage. The overall return rate was 47.4% (37.8% for our space science case studies and 62% for geophysics). Although this is close to half, the questionnaire format sacrificed the personal descriptions and in-depth answers that interviews had provided in the previous phase. The interviews therefore proved essential for the archival information database. Its close to 200 records include information concerning the interview subjects' personal record-keeping practices as well as information about the location and types of records the collaborations produced. In summary, information in the archives database was drawn from both questionnaires and interviews; it was used to help identify trends and locate gaps in the documentation of our case studies. A copy of the archival database form is in Attachment C-4.

C. Archives Site Visits
The major institutional settings we have encountered for projects in space science and geophysics have been academia, government laboratories (including space flight centers), government-contract laboratories, and corporate laboratories. Because of the complexity and variety of the institutional settings that we would encounter in this phase of the project, we decided to develop question sets for our meetings with archivists and records managers at the various institutional settings. We developed three versions: one to be used for meetings with an archivist, one for meetings with a records manager, and one for meetings at which both an archivist and a records manager of the institution are present. A copy of the questionnaire for combined archivists and records managers is in Attachment C-5.

To supplement the information from the interview subjects concerning the administration and planning of NASA space science projects and the functions of various offices, Warnow-Blewett and Capitos visited several individuals in managerial positions at NASA. These included NASA discipline scientists, division chiefs, and project scientists. Meetings were also held with records managers at both the Headquarters and flight center levels as well as with managers of the NASA History Office. Discussion concerning the new NASA Records Retention Schedule gave insight into the direction of NASA records management.

Along with these NASA meetings, Warnow-Blewett, Capitos, Genuth, and Anderson visited records managers and administrators in various agencies associated with our geophysics case studies, including the NSF, the National Oceanic and Atmospheric Administration, and Joint Oceanographic Institutions, Inc. These meetings were followed by visits to the National Archives' appraisal archivists to discuss the current records retention policies of the government agencies involved with our projects.

D. Archival Analysis
Our archival analysis covers a wide range of information on records. The topics include patterns of records creation, use, and reuse by the collaboration as well as patterns of records retention and destruction. We also report on the locations where valuable sets of records are likely to be found, which will provide opportunities for preservation recommendations that appear "natural" to our records creators.

The archival analysis has been based on all aspects of our work: the historical-sociological analysis of interviews; site visits; questionnaires from scientists, archivists, and records managers; and our archival database. "Archival Findings and Analysis" for space science is Report No. 2, Part A: Space Science, Section 3; for geophysics and oceanography, Report No. 2, Part B: Geophysics and Oceanography, Section 3.

E. Appraisal Guidelines
The appraisal guidelines were developed initially out of analyses of the structures and functions of the multi-institutional collaborations and policy groups. First drafts prepared by Anthony Capitos and Joan Warnow-Blewett were based on the interview transcripts; further additions and refinements were derived from Genuth's historical-sociological findings. Site visits with records officers, scientists, and administrators provided key insights and guidance. After a review by project consultants Smith, Oreskes, and Day, the revised appraisal guidelines were distributed to the project's Working Group for further comments and corrections. The "Appraisal Guidelines" for records of collaborations in space science are in Report No. 2, Part A: Space Science, Section 4; for geophysics and oceanography, Report No. 2, Part B: Geophysics and Oceanography, Section 4.

F. Catalog of Source Materials
Throughout the long-term study, we have aimed to preserve the valuable documentation we encountered, working in cooperation with institutional archivists. These records, along with our oral history interviews, will be cataloged for the AIP International Catalog of Sources for History of Physics and Allied Sciences and shared with RLIN-AMC, the major online archival database. Accessibility of information on resource materials will help to foster research on aspects of multi-institutional collaborations by historians, sociologists, and other scholars.



AIP Working Group for Documenting Multi-Institutional Collaborations in Space Science and Geophysics

Main Consultants

Dr. Robert Smith
Smithsonian Institution

Dr. Naomi Oreskes
Dartmouth College

Ms. Deborah Day
Scripps Institution of Oceanography

Prof. Lynne Zucker
Dept. of Sociology

Archival Representatives
Ms. Helen Samuels

Dr. Anne Millbrooke
Archival Consultant

Dr. Sharon Gibbs Thibodeau
National Archives & Records Administration

Mr. Richard McKay
National Archives & Records Administration

(Federally-Funded Research & Development Centers)
Ms. Victoria Davis
Fermi National Accelerator Laboratory

Others on the Working Group
Prof. Arthur Davidsen
Dept. of Physics and Astronomy
The Johns Hopkins University

Prof. Peter Galison
Dept. of the History of Science
Harvard University

Prof. C. Stewart Gillmor
Dept. of History
Wesleyan University

Prof. Lowell Hargens
Dept. of Sociology

Mr. Robert Heinmiller

Dr. John Krige
Dept. of History & Civilization
European University Institute

Dr. Frank McDonald
Institute for Physical Science & Technology
University of Maryland

Prof. Chandra Mukerji
Dept. of Sociology
Univ. of Calif. at San Diego

Dr. John Naugle
National Aeronautics & Space Administration (retired)

Dr. Frederik Nebeker
Center for the History of Electrical Engineering
Rutgers - The State University

Dr. William Nierenberg
Scripps Institution of Oceanography

Dr. Arthur Nowell
School of Oceanography
Univ. of Washington

Mr. Kenneth Pedersen
School of Foreign Service
Georgetown University

Dr. John S. Perry
Atmospheric Science Board
National Academy of Sciences

Dr. Charles Prewitt
Geophysical Laboratory
Carnegie Institution of Washington

Ms. Mary Martha
National Academy of Sciences (retired)

Dr. Jeffrey D. Rosendhal
Astrophysics Division
NASA Headquarters

Prof. Arturo Russo
Instituto di Fisica
Universita di Palermo

Prof. James Van Allen
Dept. of Physics & Astronomy
University of Iowa

Prof. Harriet Zuckerman
The Andrew W. Mellon Foundation


Ms. Joan Warnow-Blewett
Project Director

Dr. Spencer R. Weart
Associate Project Director & Chair, Working Group

Mr. Joel Genuth
Project Historian

Mr. Anthony Capitos
Project Archivist

Question Set For Policy Makers (Revised June 1992)


I'd like to ask you about work on the ____________ mission, which began in 19__.

Here is a list of some of the participants and a list of some of the publications reporting results.

The AIP study is archival as well as historical, so I'll be asking some questions about the availability of documents or other materials that might be of use to future historians of science. There will also be questions about the social organization of the mission, the science produced, and the instrumentation involved.

First, some questions about yourself:

1. Where, when, and in what field did you get your highest degree? [If a Ph.D.:] What was the topic of your thesis?

a. How did you get involved in this particular field/subfield?

2. When did you become involved in the policy matters and the affairs of the [fill in name of organization or committee]?

3. What attracted you to this sort of work?

a. Did you feel obliged by a sense of civic duty to the scientific community?

b. Did you view the opportunity as a way to have a broader impact on the conduct of research?

c. Did you view the opportunity as a way to advance your professional interests in your institution or your discipline?

4. What previous appointments have you held? Have you kept a continuous set of professional files over the course of your career?

Let's talk about the committee or agency that you work on

5. Who appointed or recruited you into this position?

6. What do you understand your charge to be?

a. Do you consider your responsibilities or powers to be too narrow or too broad?

b. Have you tried during your service to alter the scope of your powers or responsibilities?

7. Who constitutes the principal audience for your work?

a. Have you ever felt you were addressing too narrow or too broad a range of people?

b. Have you ever tried to restrict or expand the level of interest in your work?

8. How is the agenda for your work typically determined? To whose needs are you responding when you take up an issue? Have the influences on your work changed over time?

a. Who has been well positioned to influence the directions you have taken?

b. What reports or correspondence have you been especially inclined to study seriously?

c. Whose advice have you been inclined to solicit when dealing with ambiguous or uncertain matters?

9. Did the committee organize itself internally to carry out its work? [If yes] [No relevant question for agency people.]

How was the committee organized and what position did you hold?

10. Did the committee keep any official records of its deliberations? [If yes] Do or did your office files include your communications with outside scientists?

a. Who was responsible for the records, and where do you think they now reside? [Who is or became responsible for your records and where do you think they now reside?]

b. Do you think the official records document the range of opinions within the committee? Do they transmit any of the "flavor" of the committee's deliberations? [Do your office files document the range of opinions within the scientific community on your office's policies?]

11. Did you keep any unofficial records of the committee's work? [Did you keep any personal records of your office's work?]

a. Did your preparation for committee meetings include writing down your thoughts for your own benefit? [Did you make a record of your thoughts in some sort of diary or personal notebook?]

b. Were you prone to take notes during meetings, and if so, have you saved them? [Did you take notes at meetings for later personal reference?]

c. Might you have discussed the committee's business in correspondence with other committee members? With the committee's advisees? With friends in the scientific community? [Might you have discussed your office's business in personal correspondence with friends inside or outside the scientific community?]

d. Did you notice anyone else on the committee who tended to take notes? [Did you notice anyone else who worked in your office with any of these traits?]

12. How did the committee reach decisions? [Can you describe for us the character and extent of your discretionary authority?]

a. Were votes taken to decide disputed issues? [Did you ever feel "burned" by post-facto disputes over decisions you made?]

b. Was the committee expected to reach a consensus? [Did you ever feel pandered to by those from whom you sought constructive criticism?]

c. Did the committee chairperson have any powers to decide or resolve disputed issues? [Did you ever feel obliged to refer an issue to a superior out of a sense of procedural propriety?]

12. How did the committee make its views known? [How did you make your decisions and their justifications publicly known?]

13. Have the committee's findings ever been disputed or rejected by either the direct recipients of the committee's advice or others in the scientific or science policy communities? [Did any of your decisions inspire others in the scientific or science policy communities to mount a serious effort to have the decision overturned?] [If so]

a. How has the committee responded? [Through what channels did your office respond?]

b. Was there much private communication about the issues? [Ask the same question!]

c. Did the committee change its manner of doing business as a result of the dispute? [Did your office change its manner of doing business as a result of the dispute?]

Now I want you to focus specifically on your and the committee's influence on policy regarding [fill in name of project].

14. How frequently do multi-institutional research projects or proposed multi-institutional projects generate business for you? Has there been change over time?

15. What was the committee's [your office's] role with respect to _____? Were you planning a program? Judging a proposal? Reviewing operations of an ongoing program?

Interviewer must make a judgment about interviewee's answer and ask questions appropriate to the interviewee's understanding of the committee's role. Suggested questions aimed at mission planning begin with 16; questions aimed at proposal judging begin with 26; questions aimed at operations begin with 36.


16. Who put significant effort into planning this program? How did they communicate among each other? Did they become a formally constituted group or were they an informal network?

17. What precedents existed for this kind of mission or program? [If none, ask the sub-questions; if answered positively, go to next question]

a. What experiences in research or administration did you bring to the effort to plan _____?

b. In what ways were they similar to and different in scope from the mission or program envisioned for _____?

c. What did you feel had been satisfying in your previous experiences and applicable to _____?

d. What did you feel had been lacking or ill-conceived in your previous experiences and better handled in _____?

18. Were you directly involved in these precedents? If so, in what capacities? If not, were others involved with the planning of _____ directly involved in these precedents?

19. In what ways did these precedents shape planning for _____? Were there aspects of the precedents that seemed especially worth emulating? Aspects that seemed especially worth reforming?

20. In what ways did _____ pose novel issues for planning?

21. How long was the period from the time _____ was first seriously discussed to the time there was a commitment to fund the program?

22. To what extent and at what stages did concerns over the funding agency's priorities or Congressional or White House interests enter into consideration?

a. How were these concerns transmitted to the planners? Is there any documentation?

b. Was dealing with these concerns resented as inappropriate to planning a research venture? Accepted as a fact of life in publicly-funded research? Enjoyed as an opportunity to politic at a high level?

23. Were there issues that provoked widespread agreement among the planners, and issues that provoked differences of opinion? [If so]

a. Why did some issues provoke different reactions?

b. How were differences settled? By debating until a consensus was reached? By compromising? By vote?

24. Did those charged with or who assumed the burden of planning rely on their own resources or did they consult with others about the problems of planning _____? If consultations took place, what records exist?

25. Have you served in a similar capacity for other programs? [If so] How would you compare your role and effectiveness in the programs you served? What accounts for the similarities and differences?


26. Who put significant effort into evaluating the proposal(s) for _____? How did they communicate among each other? Did they become a formally constituted group or were they an informal network?

27. Did you personally know about _______ prior to the consideration you gave to it as a member of the committee [as an official of your agency]? [If so]

a. Did anyone in particular bring it to your attention?

b. Was it the object of widespread discussion or speculation in the scientific community?

28. What links, if any, did you have with the proponents of _______?

a. Were any of the proponents students or postdocs of yours?

b. Had you previously collaborated with any of the proponents?

c. Had you previously worked or studied in the same institution as any of the proponents?

29. How was ______ brought to the committee's [the program office's] attention?

a. Did a funding agency [advisory panel] request that ______ be reviewed in the context of a more general review of the agency's options and policies?

b. Did _____'s proponents request a review in the hopes of building support for their ambitions?

c. Did a committee member or members [agency employee] put _______ on the committee's [agency's] agenda in the belief that the program needed higher-level stimulation to advance?

30. What else was on the committee's [program office's] docket at the time _____ was reviewed? Who was responsible for those other items being before the committee [agency]?

31. To what extent and at what stages did concerns over the funding agency's priorities or Congressional or White House interests enter into consideration?

a. How were these concerns transmitted to the reviewers? Is there any documentation?

b. Was dealing with these concerns resented as inappropriate to reviewing research proposals? Accepted as a fact of life in publicly-funded research? Enjoyed as an opportunity to politic at a high level?

32. What criteria, explicit and implicit, did you feel were being employed in evaluating the programs?

a. Were there differences of opinion within the committee [agency] on what the criteria for evaluation should be?

b. Was there anything about the _____ project or the other items on the committee's [program office's] agenda that provoked debate over the proper criteria for evaluation?

33. I assume _____ exists in part because the committee recommended it. Who championed _____ within the committee?

a. What were the arguments that made _____ appear better than other projects before the committee?

b. Who, if anyone, was reluctant or resistant to recommending _____? What were the grounds for their position?

c. What roads were not taken as a result of the pursuit of _____? With the benefit of hindsight, do you think the committee's consideration of _____ was sound in procedure and sound in substance?

34. Did your work on this committee lead to any enduring role in _____? Did it lead to any further work with other committee members? Did other members of this committee play further roles in _____ or continue to work together?

35. Have you served in a similar capacity for other programs? [If so] How would you compare your role and effectiveness in the programs you served? What accounts for the similarities and differences?


36. Who put significant effort into reviewing the operations of _____? How did they communicate among each other? Did they become a formally constituted group or were they an informal network?

37. Did you personally know about _______ prior to the consideration you gave to it as a member of the committee [as an official of your agency]? [If so]

a. Did anyone in particular bring it to your attention?

b. Was it the object of widespread discussion or speculation in the scientific community?

38. What links, if any, did you have with the proponents of _______?

a. Were any of the proponents students or postdocs of yours?

b. Had you previously collaborated with any of the proponents?

c. Had you previously worked or studied in the same institution as any of the proponents?

39. What aspects of this program's operations needed your advice? Instrumentation? Data collection? Data analysis and interpretation? Management and administration?

40. What made these aspects of the program problematic for the people who were to carry out the program? What qualified you to serve in an advisory capacity?

a. Were there technical ambiguities or difficulties that required specialized expertise that was unavailable to program personnel?

b. Did divisions of opinion about these aspects within the program make the solicitation of outside advice desirable?

41. To what extent and at what stages did concerns over the funding agency's priorities or Congressional or White House interests enter into consideration?

a. How were these concerns transmitted to the advisors? Is there any documentation?

b. Was dealing with these concerns resented as inappropriate to advising researchers? Accepted as a fact of life in publicly-funded research? Enjoyed as an opportunity to politic at a high level?

42. In what ways did the program's advisors seek to serve the program? Did you make recommendations to the program's implementers? Did you lobby for the program? Did you provide the program's implementers with contacts or referrals?

a. What did you consider most important in your efforts to help this program?

b. Did advisors agree among themselves on what would be helpful?

c. Did advisors and advisees agree on what would be helpful?

43. What materials did you need in order to carry out your work? Did you always receive what you needed? Did you keep what you received?

44. What criteria, explicit and implicit, did you feel were being employed in evaluating the program's operations?

a. Were there differences of opinion among the advisors on what the criteria for evaluation should be?

b. Was there anything about the _____ project or the other items on the committee's [program office's] agenda that provoked debate over the proper criteria for evaluation?

45. What did you do that most helped the program fulfill its promise?

46. Do you feel that in any way you failed or disappointed the program?

47. Have you served in a similar capacity for other programs? [If so] How would you compare your role and effectiveness in the programs you served? What accounts for the similarities and differences?

Let's finish up by returning to your personal concerns

48. How do individual contributions to helping a program get recognized?

a. Were there aspects of your contribution which were under-recognized? Over-recognized?

b. Did anyone else, to your knowledge, feel their contribution went inadequately recognized? Do you feel that anyone else received much more or less recognition than deserved?

c. What personal qualities are most important for being effective on a committee [a successful career in public administration of research]?

49. What specific aspects of the experiment or mission led to either good or bad science?

50. Is there any topic you'd like to return to?

51. Is there anything I didn't ask that you would like to talk about?

a. What obstacles do you feel you've overcome to get this far in your career?

b. Did your personal or family life have to be adjusted to accommodate your work?

We will analyze this interview anonymously as part of the AIP project on multi-institutional collaborations. Do we have your permission to keep the transcript in AIP archives, after the completion of the project, with your name identified, for the future use of scholars?

* * * * *

[Using the checklist, note amount of material, approximate dates, physical condition, content, informational value, whether original or copy, and, for machine-readable records, whether the required hardware and software are still available.] Finally, a few questions about records:

52. Let's estimate the volume of your records on this mission.

53. In how many different locations do you keep papers relating to this mission?

54. Have you kept most of them or thrown most of them away?

Historical and Archival Indexing Form

Historical Issues

Interview subject _____________________________ Project__________________

Formation of Project

General Organization and Management of Collaboration [relations among PIs, Flight Centers, and NASA HQ]

Organization of Teams and Development of Experiments [Instruments]

Funding Patterns


Data Analysis

Publication and Dissemination

Length and Logistics of Program

Archival Issues

Archival Database Form

Project Name:
Position on Project:
Current Employer:
Employer during Project:
Uses e-mail:
Saves e-mail:

Institutional Setting

Individual has papers? (general):

Individual has project files?:

Specific records kept:

Records created but not kept:

Other location of specific records:

Name of project member having records:

Start Date:


Questionnaire for Archivists and Records Managers

1. What type of institution is ____________________ (e.g., government agency, government contract lab, university (private or state), corporation)?

2. What is the governing body for ____________________?

Records manager:
3. How has the operating authority of your records management program been established (e.g., by statute, agency directive, mission statement, charter, articles of incorporation)?

a. Does the operating authority make the program mandatory?

b. Does the operating authority clearly assign authority and responsibility for the program to a single official of your institution?

4. To what administrative unit in your institution do you report? (Get a copy of the organization chart, if possible.)

a. What is the title of the person to whom you report?

b. Is the records management program sufficiently staffed?

c. What % of your time do you spend on records management?

d. What are your job responsibilities?

e. What is the size of your staff? Professionals _____? Support staff _____? (Are these FTEs?)

f. Is there a liaison network for records activities throughout the institution?

g. Has there been/do you expect there to be any growth in the funding and/or staffing for your program?

5. How has the operating authority of your archives been established (e.g., by statute, executive order, mission statement, charter, articles of incorporation)?

a. To what administrative unit in your institution do you report?

b. Is the archives program sufficiently staffed?

c. What is the size of your staff? Professionals _____? Support staff _____?

d. Has there been/do you expect there to be any growth in the funding and/or staffing for your program?

6. Are you responsible for records from any institution other than _____________________?

a. Have any of your institution's records been transferred to or deposited in another institution for long-term or permanent retention?

b. Do you lend records?

Records Manager:
7. When was the records management program initiated?

8. When was your archives program initiated?

9. Do you deal with any federal records?

10. Where are records administered by your programs stored?

Records manager:
11. Do you create your own records schedules to fit the specific records created by your institution?

a. If yes, who has to approve them?

b. Who actually determines how long records will be retained (records creators, files administrators, etc.)?

c. Does this institution have any workshop(s) to train files administrators in record keeping, files maintenance and scheduling of records?

12. Who determines which records become part of the archives?

Records manager:
13. Has a survey been done of records produced by your institution?

14. Do you have a vital records program?

a. Does it address only emergency operating records or also rights and interests records?

b. Are vital records stored off-site?

c. If in microform, is equipment available to read them?

Records manager:
15. Are audiovisual, cartographic, and electronic records (i.e., audio and video tapes, maps, computer tape) covered by the procedures of your records management program?

16. Are there any special storage/handling procedures in place for these records?

17. Do scientists keep their own files in their offices or are they maintained by administrative staff?

a. Does your program provide guidance on the difference between "personal professional" files and institutional records?

b. Have you had any problems in this area?

18. Are project or experiment files kept together physically as case files or intellectually (e.g., by code no. or budget line)?

19. Are records created by scientific projects or experiments treated differently in any way from administrative files (i.e., scheduled/retained by different people, etc.)?

20. How would you or a researcher go about locating all the records related to a specific project or experiment?

21. If a project or experiment at this institution were to be ranked as highly significant, would your program permit retention of its files?

a. If yes, what would be the process?

b. If no, could the files be saved elsewhere?

c. Has this ever happened?

Records Manager:
22. What is your policy regarding the records of contracts, involving either work by your institution or work for your institution, with:

a. private industry?

b. government or national labs/government agencies?

c. academic institutions?

23. Are data from experiments at your institution fed into any data archives?

a. Where are these data archives?

b. What kinds of records are generated as a result of these transactions: are any administrative records created that document the transfer; does your institution receive back any data reports or compilations from the data archives?

24. To what extent are records created by your institution, such as correspondence, minutes, interoffice memoranda, experimental logbooks, personal notes, etc., kept in electronic form?

a. Is there a policy regarding electronic records?

b. What is the source of this policy?

c. Is documentation created and kept?

d. Are electronic records scheduled? w/documentation?

e. Are any of these items retained and/or transferred to the archives in electronic form rather than hard copy?

f. Is current records management of electronic records adequate?

g. Has training in the management of such records been provided?

h. Is records management staff involved in the design of new electronic systems?

i. Is there an inventory of electronic records?

j. How are records protected from deterioration, obsolescence or accident?

k. Are there backups?

l. What are your most pressing problems in this area?

25. Has the increased use of the FAX machine, with the frequently poor quality and rapid deterioration of the copies produced, had an impact on records retention and/or the preservation of permanent records in your institution?

a. Are documents transmitted by FAX routinely discarded or are they treated as other correspondence?

b. If retained, are these documents copied on to acid-free or other paper or transferred to some other format?

26. Are any of your records microfilmed or transferred to any other non-paper medium (including optical disk)?

a. When is this process carried out?

b. Who carries out this process?

c. Who approves the transfer?

d. Is any finding aid (e.g., an index) produced as a part of this process?

e. If microfilm, is archival quality film produced?

f. Do you keep a master negative, copy negative and use copies?

g. Where and how are the various copies stored?

h. Are the originals discarded after copying?

i. What kind of verification process takes place prior to any disposal?

j. Is there periodic inspection of the condition of the film?

k. Are the microforms scheduled?

l. Are finding aids produced to individual series of records on film?

m. Is there an overall inventory of records in microform?

n. Is training provided in the care and management of such records?

o. Are any COM, CAR, or optical disk systems in use?

p. To what extent are they hardware/software dependent?

q. How are restricted records handled?

We are particularly interested in the records of projects involving three or more institutions, which could be academic institutions, government laboratories, or private corporations. Your institution might have had one person involved in such a project, as a PI or a member of the team, or several individuals, or you might have provided an instrument or site for such a project.

27. Does your institution's involvement in collaborations result in the records of those projects being retained by another institution and/or in your institution's taking custody of all or most of the records of a project involving more than one institution?

28. Are you preserving records of other institutions?

29. We would like some information about the following types of records, either those generated by your institution or those generated by collaborations: whether they are kept temporarily, on a long-term (25 years or more) basis or are preserved permanently; also, is this decision based on the type of record or the importance of the project?



Administrative records:
Inventories of apparatus?
Progress and final reports?
Financial reports?
Contracts w/other institutions:
     Letters of agreement?
     Memoranda of Understanding?
     Contractor files?
Intracollaboration mailings, including
      minutes, technical reports, and
      other memoranda?
Press releases?
Photographs?
Other audiovisual records
     (i.e., audio and video tapes)?
Technical records:
Instrument designs and specifications?
Experiment Logbooks?
Data analysis records?
Notebooks of individual scientists?
Other professional files of individuals?

Let's talk about access:

30. Do any of the following concerns enter into decisions regarding access to records in your custody: National security? Proprietary rights?

31. Is any review process applied to records in your custody before public access is allowed?

a. Is this an internal review or are some reviews carried out by personnel from another institution?

b. Are reviews carried out by more than one unit in your institution?

c. Is the review process used dependent on factors such as the age of the records?

32. What kinds of finding aids, lists, indexes, etc. are available?

33. Are these aids created by the records creators or by the archives/records management program?

34. Are your finding aids open to the public without review?

35. Are your collections reported to any national, international, or regional database?

36. Does your institution produce any lists of its publications (documents, journal articles, books, etc.)?

37. Does your institution have an oral history program?

a. If yes, what unit within your institution carries out this program?

b. If no, has any other organization conducted oral histories with staff of your institution?

[Table of Contents]

AMPTE: Active Magnetospheric Particle Tracer Experiment, one of the AIP case study projects.

AO: Announcement of Opportunity, a statement from a funding agency announcing that scientists are welcome to propose experiments or Explorer-class projects and describing the engineering conditions that proposing scientists should meet. (See "Experiment" and "Explorer," below.)

APL: The Applied Physics Laboratory/Johns Hopkins University

Discipline Scientist: A scientist employed by NASA Headquarters to administer a program of research and development grants for a disciplinary community and to help develop ideas for space science projects of interest to that community.

ESA: European Space Agency

ESTEC: European Space Research and Technology Centre, the lone ESA space flight center.

Experiment: The design, construction, and operation of a scientific instrument (generally consisting of sensor(s) plus electronics for instrument operation and signal amplification) that is flown on a spacecraft; plus processing, interpreting, and disseminating the data that are telemetered back.

Explorer: A class of NASA projects funded in series from an established line-item in the NASA budget for smaller scientific missions. Explorer projects are not directly scrutinized by Congress or the Office of Management and Budget.

GSFC: Goddard Space Flight Center, a NASA space flight center.

HEAO: High Energy Astrophysical Observatory. HEAO is usually followed by a number or the word "program" to designate one or the entire series of three satellites launched to study cosmic sources of high-energy particles and radiation.

Instrument: See Experiment.

Inter-disciplinary Scientist: A scientist who, by designation of NASA Headquarters, serves on a project's SWG and performs multi-experiment data analyses but does not contribute an experiment.

ISEE: International Sun-Earth Explorer, one of the AIP case study projects.

IUE: International Ultraviolet Explorer, one of the AIP case study projects.

JPL: The Jet Propulsion Laboratory, a space flight center managed for NASA by the California Institute of Technology.

MOWG: Management Operations Working Group, a committee of scientists that advises a discipline scientist on the needs of the discipline, including the value of possible space science projects. (Scientists on MOWGs include employees of NASA space flight centers as well as those from other institutions.)

MSFC: Marshall Space Flight Center, a NASA space flight center.

NAS: National Academy of Sciences

NASA: National Aeronautics and Space Administration

NSSDC: National Space Science Data Center

OSSA: Office of Space Science and Applications, the division of NASA that supports space science projects.

PI: principal investigator, the scientist in charge of an experiment team and accountable for the conduct of an experiment. PIs often make decisions on the level of engineering risk to assume in the pursuit of scientific capabilities and on divisions of labor within a team for building an instrument or analyzing data.

Program Manager: An engineer at NASA Headquarters who manages the budget and schedule for multiple projects and oversees the project managers at the space flight centers responsible for building them. Program managers have influence over the composition of AOs, the politics of the NASA budget, and decisions to change the budget or scope of a project.

Program Scientist: A scientist at NASA Headquarters who advises the program manager on scientific effects of managerial issues. The program scientist has often been the discipline scientist who helped form the project. Program scientists have influence over the selection of scientific instruments to fly on a project and represent the interests of a project's participating scientists to officials at headquarters in the event that the participating scientists and engineers cannot resolve intra-project conflicts.

Project Manager: The engineer at the space flight center responsible for the design, construction, and operations of a spacecraft. The project manager is the principal authority on the project's budget and schedule and allocates the spacecraft's limited engineering resources among its sub-systems, including the payload of scientific instruments. Until launch, the project manager is the most powerful person in a space science project.

Project Scientist: A scientist, usually an employee of the space flight center managing the project and (in NASA) usually a PI for an experiment on the project, who advises the project manager on the engineering needs of the participating scientists. Project scientists also chair the meetings of the Science Working Group (see below), which is the forum for participating scientists to discuss their common problems and plans. Project scientists can appeal a project manager's decision to the program scientist. After the spacecraft is launched, project scientists control funds to support analyses of project data.

RFP: Request for Proposals, a statement from a funding agency or space flight center to stimulate submission of proposals for the purpose of letting a contract for a spacecraft.

Space Flight Center: An institution, usually government-managed, for research and development into spacecraft designs and management of spacecraft construction. Space flight centers manage science projects and usually include research scientists on their staffs.

Study Scientist: Discipline scientists may hold the title "study scientist" during the planning stages of a mission.

SWG: Science Working Group, the group that sets the detailed science strategy for a project and discusses problems common to the project's science. The SWG's core is always the PIs, and its meetings are always chaired by the project scientist; other scientists or engineers participate in SWG meetings as the core members or agency headquarters deem appropriate.

[Table of Contents]

Chief Scientist: The administrator who oversees the day-to-day operations of a project organized as a consortium.

Co-chief Scientists: In oceanography, the scientists responsible for overseeing data acquisition on a research vessel. The term is also appropriate for the leaders of ad hoc research teams that form to use instrumentation provided by a consortium.

COCORP: Consortium for Continental Reflection Profiling, one of the AIP case study projects.

Consortium: The term customarily applied to an alliance of institutions seeking to introduce into academic geophysics a data-acquisition technique developed in industry or other scientific fields. Geophysics research consortia do not always incorporate, but the term's connotations of formality and permanence are appropriate.

DSDP: Deep Sea Drilling Project, one of the AIP case study projects.

Experiment: Either the activities of a scientist and their team members to develop a measuring technique, use the technique to acquire data in a multi-institutional project, and analyze the data for publication; or the activities of a research team that uses a consortium's instrumentation to acquire and process data on which individual members hope to publish papers.

GISP: Greenland Ice Sheet Project, one of the AIP case study projects.

ICSU: International Council of Scientific Unions, composed of the national academies or other appropriate scientific institutions of member nations.

IRIS: Incorporated Research Institutions for Seismology, one of the AIP case study projects.

ISCCP: International Satellite Cloud Climatology Program, one of the AIP case study projects.

JOI: Joint Oceanographic Institutions, a formally incorporated consortium of oceanographic institutions. JOI contracts with the NSF to manage ODP and subcontracts most responsibilities to the ODP project office at Texas A&M Research Foundation.

JOIDES: Joint Oceanographic Institutions for Deep Earth Sampling, an unincorporated consortium that started with four American oceanographic institutions and has expanded both domestically and internationally. JOIDES panels have set the scientific agenda for DSDP and ODP. JOIDES's headquarters moves regularly among member institutions and is currently in Cardiff, Wales.

NAS: National Academy of Sciences

NASA: National Aeronautics and Space Administration

NIST: National Institute of Standards and Technology

NOAA: National Oceanic and Atmospheric Administration

NSF: National Science Foundation

ODP: Ocean Drilling Program, the continuation, under international auspices, of DSDP.

ONR: Office of Naval Research

Parkfield: A rural site on the San Andreas Fault and the shorthand name for the Parkfield Earthquake Prediction Experiment, one of the AIP case study projects.

PI: principal investigator, the scientist responsible for the conduct of an experiment in the first sense of "Experiment" (see above). We view PIs as functionally equivalent to "co-chief scientists" for the second sense of "Experiment."

PICO: Polar Ice Coring Office, an institute under NSF contract to develop and deploy ice drills for scientific use. The PICO contract is currently held by the University of Alaska, Fairbanks.

Program Manager: A scientist who manages a funding agency's program of grants and contracts for research and development.

SMO: Science Management Office, the most common of the terms used for an office that takes responsibility for the logistics and other communal business of a project built around the common interests of several PIs in jointly coordinating their experiments. One of the project's PIs directs the SMO.

SWG: Science Working Group, the most common of the terms used for the group of PIs whose experiments a project is coordinating.

Team: Either the cluster of people (which can include postdocs, graduate students, engineers, technicians, and executives or employees of businesses producing scientific instrumentation) that gathers around a scientist working on an experiment; or the several researchers who together use a consortium's instrumentation to acquire data.

UNESCO: United Nations Educational, Scientific and Cultural Organization

USGS: United States Geological Survey

WCR: Warm Core Rings, one of the AIP case study projects.

WCRP: World Climate Research Programme, a programme jointly supported by WMO and ICSU for the coordination of international climatology projects.

WMO: World Meteorological Organization, a specialized agency of the United Nations

WOCE: World Ocean Circulation Experiment, one of the AIP case study projects.


[1] For more information on these projects, see Part A: Space Science, Section 1: "Selected Case Studies in Space Science" and Appendix D: "Space Science Acronyms and Glossary" of this report.

[2] More recently, it appears that flight centers have become less important for initiating projects. Officials at both NASA Headquarters and Goddard Space Flight Center cite the importance of "discipline scientists" at NASA Headquarters and their "Working Groups," committees of flight-center and external scientists, as the initial proposers of desirable projects.

[3] Robert W. Smith, The Space Telescope: A Study of NASA, Science, Technology, and Politics (New York: Cambridge University Press, 1989).

[4] The Applied Physics Laboratory (APL), which is administered by Johns Hopkins University, does support in-house spacecraft design and construction. The APL, however, is a product of logistical convenience in research contracting during World War II. It subsequently became an independent sub-division within the university rather than part of the academic departments.

[5] The name and precise responsibilities for this office have changed over time, but it has always been responsible for NASA's science projects.

[6] "Working Group" is a loosely and frequently used term in space science. In this section, it refers to standing or ad hoc advisory committees of scientists. In the next section, which analyzes the organization of fully formed projects, "Science Working Group" refers to the scientists who design and build a project's experiments plus others the scientists choose to include in their collective meetings.

[7] Congressional and presidential activities and the records they produce are beyond the scope of this project. Documents from this administrative level are likely to survive.

[8] There are other variables, such as the size of the spacecraft or the number of scientific instruments in its payload, that may also account for projects varying in whether they adopt a multi-PI or single-PI structure. Had it been possible for the AIP study to construct a data set with information on projects, participating scientists, and publications, such hypotheses could conceivably have been well tested. The generalization offered here is also likely time-specific. The single-PI projects, as will be seen, were more difficult for the flight centers to manage, and later outside instigators of projects may have found anything other than a multi-PI structure to be a political impossibility.

[9] An exception to this statement is that the program scientist for one project helped the participating scientists to negotiate a relaxation in the reporting requirements that the project manager had been accustomed to demanding from his experience on manned missions.

[10] The PI in the other case was not interviewed; we make no assumption about his views on the changes he was required to make in his instrument.

[11] The third center built a component with only a general bearing on the others.

[12] The echelle spectrograph on IUE, uv-to-optical converter and SEC Videocon camera on IUE, and solid state spectrometer on Einstein were the examples brought up in interviews.

[13] See Section III.A., above.

[14] See Section II., above.

[15] In the United States, this scientist was also usually a PI; in ESA projects, this scientist never was a PI.

[16] See Joan K. Haas, Helen Willa Samuels, and Barbara Trippel Simmons, Appraising the Records of Modern Science and Technology: A Guide (Cambridge, Mass.: Massachusetts Institute of Technology, 1985); and Joan N. Warnow with Allan Needell, Spencer Weart, and Jane Wolff, A Study of Preservation of Documents at Department of Energy Laboratories (New York: American Institute of Physics, 1982).

[17] The retention of e-mail and other electronic records is an issue which many archivists are currently grappling with. For a copy of the Society of American Archivists' position statement on electronic records, see "Archival Issues Raised by Information Stored in Electronic Form," Archival Outlook (May 1995): 8-9.

[18] Two interesting reports on changes in technologies and their impact on research are "Report of the APS [American Physical Society] Task Force on Electronic Information Systems," Bulletin of the American Physical Society 36, no. 4 (April 1991): 1119-1151; and Avra Michelson and Jeff Rothenberg, "Scholarly Communication and Information Technology: Exploring the Impact of Changes in the Research Process on Archives," The American Archivist 55 (Spring 1992): 236-315.

[19] The categories of functions have been based upon Haas et al., op. cit. note 16.

[20] Formerly the National Academy of Sciences' Space Science Board.

[21] Report of the Steering Committee for the Study on the Long-Term Retention of Selected Scientific and Technical Records of the Federal Government (Washington, DC: National Research Council, 1995).

[22] For more information on these projects, see Part B: Geophysics and Oceanography, Section 1: "Selected Case Studies in Geophysics and Oceanography" and Appendix E: "Geophysics and Oceanography Acronyms and Glossary" of this report.

[23] See Part Two, Section I.A., above.

[24] The stress on the connections among experiments backfired on these instigators in their proposal for a follow-up project. When reviewers declared one experiment not worth doing, the rest of the proposals lost their justification.

[25] The effort sadly appears to have fallen victim to changes in computer technology and the failure of anyone to migrate the project's data bank from one system to another. Any analysis of multiple data streams from now on would have to be constructed from the community data banks to which the PIs were individually required to send their data.

[26] An apparent exception to this statement involves the pan-European ice-coring project in Greenland.

[27] Chandra Mukerji, A Fragile Power: Scientists and the State (Princeton, NJ: Princeton University Press, 1989).

[28] See Part Two, Section V., above.

[29] They made the move because they disliked having to raise soft money for their salaries, felt confined by the specialized labors that comprise research, or became enamored of the opportunities in administration to exercise their intellects and influence their science.

[30] The categories of functions have been based upon Joan K. Haas, Helen Willa Samuels, and Barbara Trippel Simmons, Appraising the Records of Modern Science and Technology: A Guide (Cambridge, Mass.: Massachusetts Institute of Technology, 1985).

[31] Report of the Steering Committee for the Study on the Long-Term Retention of Selected Scientific and Technical Records of the Federal Government (Washington, DC: National Research Council, 1995).

[32] Without the efforts of four research assistants working on the UCLA team, this research could not have been completed. We are indebted especially to the heroic efforts of Maximo Torero, who conducted the analysis of the data under very difficult time constraints, and also to the detailed and accurate coding of interview material by Richard Johnson, Richard Bernard, and Christine Beckmaan.

[33] In international projects, each country tends to share data only nationally.

[34] We argue that while it is reasonable for the relevant community to feel that the magnitude of resources spent on the development and deployment of Einstein should entitle them to share in its use, it is also legitimate for the PIs to feel that they did not reap reasonable rewards from Einstein given their investment of their own talents under the assumption that they would have a longer period of exclusive use and stronger claims to time after that period than they in fact received. Again, it is a question of the proper alignment of incentives.

[35] Drum H. Matthews, "Profiling the Earth's Interior," 1990 Yearbook of Science and the Future (Chicago: Encyclopaedia Britannica, 1989), pp. 178-197; quotation on p. 189.

[36] Joseph M. Steim, The Very-Broad-Band Seismograph (Ph.D. dissertation, Harvard University, 1986), p. 1.

[37] James Dewey and Perry Byerly, "The Early History of Seismometry (to 1900)," Bulletin of the Seismological Society of America 59 (1969): 183-227, 208.

[38] Steim 1986 (cited above), p. 8.

[39] Ben S. Melton, "Earthquake Seismograph Development: A Modern History," in two parts, Eos 62 (1981): 505-510, 545-548.

[40] Ibid., p. 545.

[41] Ibid., pp. 545-546, and Joseph M. Steim, transcription of an interview conducted by Frederik Nebeker, 25-26 January 1994 (Center for the History of Physics, American Institute of Physics, 1994) [hereafter referred to as Steim interview], pp. 9-11.

[42] Dewey 1969, p. 219.

[43] Steim 1986, p. 11.

[44] Ibid., p. 9.

[45] Melton 1981, p. 505.

[46] Erhard Wielandt and Joseph M. Steim, "A Digital Very-Broad-Band Seismograph," Annales Geophysicae 4 (1986): 227-232.

[47] Steim interview, p. 5.

[48] IRIS (Incorporated Research Institutions for Seismology, Inc.), Science Plan (Washington D.C.: IRIS, 1984) [hereafter IRIS 1984], p. 13.

[49] Ibid.

[50] Adam Dziewonski, transcription of an interview conducted by Frederik Nebeker, 26 January 1994 (Center for the History of Physics, American Institute of Physics, 1994) [hereafter referred to as Dziewonski interview 1994], p. 20.

[51] Steim interview, p. 25.

[52] Adam Dziewonski, transcription of an interview conducted by Joel Genuth, 4 October 1993 (Center for the History of Physics, American Institute of Physics, 1993) [hereafter Dziewonski interview 1993], p. 8; and Steim interview, p. 2.

[53] Steim 1986, p. 11.

[54] Matthews 1989, p. 186.

[55] Ibid.

[56] Melton 1981, p. 505.

[57] IRIS 1984, p. 7.

[58] Ibid.

[59] Ibid., pp. 7-9.

[60] Ibid., p. 9.

[61] Dziewonski interview 1993, p. 17.

[62] Ibid., pp. 8-9.

[63] Jon Peterson and Charles R. Hutt, "IRIS/USGS Plans for Upgrading the Global Seismograph Network," U.S. Geological Survey Open-File Report 89-471 (1989, revised 1993), p. 1. See also Dziewonski interview 1993, p. 37.

[64] Peterson 1993, p. 1.

[65] Dziewonski interview 1993, pp. 39-40.

[66] IRIS (Incorporated Research Institutions for Seismology, Inc.), The Design Goals for a New Global Seismographic Network (Washington DC: IRIS, 1985) [hereafter IRIS 1985], p. 3.

[67] Steim interview, p. 8.

[68] Erhard Wielandt and Joseph M. Steim, "A Digital Very-Broad-Band Seismograph," Annales Geophysicae 4 (1986): 227-232, 227.

[69] Steim interview, p. 4.

[70] Wielandt 1986, p. 227.

[71] Steim 1986, p. 5.

[72] Dziewonski interview 1993, p. 11.

[73] Wielandt 1986, p. 227.

[74] Dziewonski interview 1994, p. 10.

[75] Steim 1986, p. 1.

[76] Steim interview, p. 29.

[77] Ibid., p. 31.

[78] Ibid., p. 36. In 1980, Paul Horowitz and Winfield Hill published The Art of Electronics (Cambridge University Press), a textbook that has been widely adopted.

[79] Steim interview, p. 33, and Dziewonski interview 1994, p. 3.

[80] Typescript manuscript dated 22 May 1978 (Steim papers).

[81] Steim interview, p. 37.

[82] Dziewonski interview 1993, p. 7.

[83] NSF proposal "Elastic and Anelastic Structure of the Earth's Interior from Free Oscillation and Surface Wave Data," PI Adam Dziewonski, dated 22 May 1979 (Steim papers). The proposal states that Harvard would provide the funds for a third seismometer.

[84] NSF proposal, p. 44.

[85] Typescript Ph.D. proposal, no date (ca. 1979) (Steim papers).

[86] Steim interview, p. 38.

[87] Dziewonski interview 1993, p. 12.

[88] Steim 1986, p. 56.

[89] Steim interview, p. 18.

[90] Dziewonski interview 1993, pp. 13-14, and Dziewonski interview 1994, p. 8.

[91] NSF proposal "Development and Deployment of Portable Broad-Band Digital Accelerometers," PIs Adam Dziewonski and Joseph Steim, dated 21 August 1981 (Steim papers).

[92] Steim interview, pp. 40-42.

[93] Ibid., p. 42, and Dziewonski interview 1994, pp. 6-7.

[94] Dziewonski interview 1994, p. 8.

[95] Steim interview, p. 27.

[96] NSF proposal "Design and Deployment of Three Very Broad-Band Digital Seismic Stations," PIs Adam Dziewonski, Erhard Wielandt, and Joseph Steim, dated 9 September 1983 (Steim papers).

[97] Dziewonski interview 1994, p. 9.

[98] Eos 64, no. 45 (1983): 767.

[99] Letter, Dziewonski to Enzo Boschi, 31 December 1984 (Steim papers), and Steim interview p. 43.

[100] Steim interview, p. 47.

[101] Eos 66, no. 18 (1985): 312.

[102] Erhard Wielandt and Joseph M. Steim, "A Digital Very-Broad-Band Seismograph," Annales Geophysicae 4 (1986): 227-232.

[103] Steim 1986, p. 78, and Steim interview, pp. 48-49.

[104] Letter, James L. Frederick to David Stein, 4 January 1988 (Steim papers).

[105] Ibid.

[106] Steim 1986, pp. 136-137.

[107] Ibid., p. 85.

[108] Dziewonski interview 1993, p. 14.

[109] Steim 1986, p. 1.

[110] Peterson 1993, p. 5.

[111] Steim interview, pp. 12-13.

[112] Eos 66, no. 27 (1987).

[113] Dziewonski interview 1994, p. 9.

[114] Letter, Dziewonski to Richards, 29 December 1988 (Steim papers), also Steim interview, p. 13.

[115] Steim interview, pp. 61-63.

[116] Gould, Inc., "Technical Proposal: Development and Production of the Global Seismic Network for the Incorporated Research Institutions for Seismology," 31 October 1986.

[117] Dziewonski interview 1994, p. 17.

[118] Quanterra Inc., "Quanterra, Inc.: The Innovators in Broad-Band Seismic Instrumentation" (promotional brochure), 1990.

[119] Steim interview, p. 51.

[120] Project GEOSCOPE, project description (Institut de Physique du Globe de Paris, ca. 1987), p. 35.

[121] Steim interview, p. 64.

[122] Ibid., p. 68.

[123] Ibid., p. 69, and Quanterra, Inc. 1990 (cited above).

[124] Buland 1992.

[125] Steim interview, pp. 71-72.

[126] Peterson 1993, p. 13, and Ray Buland, "United States National Seismograph Network," IRIS Newsletter (Summer 1992): 4-6.

[127] Steim interview, pp. 69-70.

[128] Ibid., p. 69.

[129] Ibid., pp. 74-75.

[130] Ibid., pp. 79-80.

[131] See Robert W. Smith and J. N. Tatarewicz, "Replacing a Technology," Proceedings of the IEEE 73 (1985): 1221-35 and Robert W. Smith, The Space Telescope (Cambridge: Cambridge University Press, 1989), especially Chapter 7. I thank Joseph Tatarewicz for calling this example to my attention.

[132] Hewlett-Packard Company, 1991 Test & Measurement Catalog (Santa Clara, CA: Hewlett-Packard, 1990).

[133] Dziewonski interview 1994, pp. 17-18.

[134] Steim interview, pp. 75-76.

[135] For an example from high-energy physics, see Frederik Nebeker, "Report on Subcontracting and the LeCroy Electronics Corporation," in Center for History of Physics, AIP Study of Multi-Institutional Collaborations, Phase I: High-Energy Physics, Report Number 4: Historical Findings of Collaborations in High-Energy Physics (New York: American Institute of Physics, 1992), pp. 135-142.

[136] See AIP Study of Multi-Institutional Collaborations. Phase I: High-Energy Physics. New York: American Institute of Physics, 1992. Report No. 1: Summary of Project Activities and Findings / Project Recommendations, by Joan Warnow-Blewett and Spencer R. Weart. Report No. 2: Documenting Collaborations in High-Energy Physics, by Joan Warnow-Blewett, Lynn Maloney, and Roxanne Nilan. Report No. 3: Catalog of Selected Historical Materials, by Bridget Sisk, Lynn Maloney, and Joan Warnow-Blewett. Report No. 4: Historical Findings on Collaborations in High-Energy Physics, by Joel Genuth, Peter Galison, John Krige, Frederik Nebeker, and Lynn Maloney.
