Center for Science in Society
Bryn Mawr College
http://serendipstudio.org/local/scisoc

2002-2003 Weekly Brown Bag Lunch Discussion
"The Culture of Science
"

November 13
Chris Couples, UNIX systems administrator for the sciences:
"The Political Practice of the Sys-Admin"

Summary
Prepared by Anne Dalke
Additions, revisions, extensions are encouraged in the Forum

Today's topic was "The Political Practice of the Sys-Admin," or what Chris described as "being judge, jury, and executioner in cyberspace: why system administration is much more than info-mechanics. What does a system administrator do? What should she do? How does system administration fit into a liberal arts college environment? Why is your system administrator a political creature?"

Having reviewed recent discussions in the Brown Bag series, Chris proposed a different way of looking at (the question of) politics and political action. At the beginning of last week's discussion, a distinction was drawn between political practice and research into and identification of systems of power/domination/marginalization--along with the suggestion that academic research in and of itself didn't quite "cut it" as political practice. So what happens when a job function (which has evolved into its own disciplinary profession) is viewed not simply from a narrow technical frame, but in a broader political context? Chris's hypothesis was that, when we look more closely at what those responsible for the maintenance of our online systems actually do, we'll realize that there is an immediate political dimension to their practice of which, as users, we would do well to remain aware.

Thesis 0. System Administration is not simply info-mechanics.
It's not simply a set of practices; it's also policy-making and policy-enforcement, which (based on the discussions here in the past couple of weeks) makes sys-admin an inherently political pursuit. In the vernacular, sys-admins issue "commands" which are unquestioningly obeyed by the machine, and which make immediate changes to the state of online environments. If we want to think of politics as the art of apportioning scarce resources, of making choices as to how things are done, then system administration has a dimension that we don't always appreciate, even as system administrators practice it. It's this dimension that Chris invited us to explore. How are the political practices of sys-admins situated in a larger context? Why does that matter to all of us who spend any amount of time in online spaces? This is where the query about being "judge/jury/executioner" comes in--at least in online spaces.

Thesis 1. Regulation of and surveillance over online spaces is qualitatively different from regulation and surveillance in offline spaces.
The ability to automate system and network monitoring functions, and to have these functions repeat iteratively, makes surveillance comprehensive and almost perpetual. The ability to monitor online environments allows for a similarly comprehensive system of regulation. Bentham wrote about the panopticon in the 18th century, and Foucault picked up the idea and ran with it in the '70s. In the '80s, Lyotard began to realize the importance of electronic surveillance and the aggregation of electronic data, saying something along the lines of "If you want revolution, then open up all the databases." Under normal circumstances, the electronic eye of system surveillance and enforcement just doesn't blink.
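
To make the "unblinking eye" concrete, here is a minimal sketch (not anything presented in the talk itself) of the kind of automated, iterative monitoring a sys-admin might schedule: a small script that scans an authentication log for failed login attempts and tallies them by source. The log path, pattern, and threshold are illustrative assumptions; run repeatedly from a scheduler such as cron, a script like this watches continuously and without fatigue.

    #!/usr/bin/env python3
    """Sketch of automated, iterative surveillance: count failed logins per host.
    The log path, pattern, and threshold are assumptions for illustration only."""

    import re
    from collections import Counter

    LOG_FILE = "/var/log/auth.log"                          # hypothetical log location
    PATTERN = re.compile(r"Failed password .* from (\S+)")  # hypothetical log format
    THRESHOLD = 10                                          # arbitrary cutoff

    def scan(path):
        """Tally failed-login attempts by originating host."""
        counts = Counter()
        with open(path, errors="replace") as log:
            for line in log:
                match = PATTERN.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for host, hits in scan(LOG_FILE).most_common():
            if hits >= THRESHOLD:
                # What happens next is the sys-admin's policy decision:
                # a warning mail, a firewall rule, or nothing at all.
                print(host, hits, "failed logins")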

Thesis 2. The ability to design, configure, and implement regulation and surveillance places the system administrator in a unique position: at once the one who creates the rules, enforces them, and punishes those who violate them.
A simple example: disk quotas. Chris is the sys-admin on a random UNIX machine where we all have accounts, and he institutes disk quotas, which limit each of us to 100MB of disk storage on his system. Through a couple of configuration files, he writes the rules (in human-readable terms) as well as the penalty--"fiats and sentences"; the computer reads and carries out these rules. If we (as users) fill up our 100MB, we'll either (depending on the way Chris writes the rules) get a nasty-gram advising us that we are over quota and need to delete files to get safely back under it, or we'll lose the ability to save files at all. In the most extreme case, this could result in our being denied access to the system, to that info-space. A more nuanced example: as a sys-admin (hopefully in consultation with the users, his "constituents" or "citizens"), Chris controls access to system services. The tools at his disposal allow him to grant and refuse access to these resources as granularly as he likes, based on criteria ranging from something as broad as the network domain from which our connection originates to something as narrow as our username or the hardware ID of the machine we're using.
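
A minimal sketch of the quota example, assuming user directories live under /home and that the policy is kept as a simple table (both assumptions for illustration, not Chris's actual configuration): the sys-admin writes the rule and the penalty once; the machine then reads and applies them uniformly, warning users over the soft limit and flagging those over the hard limit.

    #!/usr/bin/env python3
    """Sketch of rules-as-configuration: a quota policy the machine carries out.
    Paths, limits, and the 'nasty-gram' wording are illustrative assumptions."""

    import os

    # The written "rules": per-user limits in megabytes.
    POLICY = {"default": {"soft_mb": 100, "hard_mb": 120}}
    HOME_ROOT = "/home"   # hypothetical location of user directories

    def usage_mb(path):
        """Total size of everything under path, in megabytes."""
        total = 0
        for dirpath, _dirs, files in os.walk(path, onerror=lambda err: None):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    continue
        return total / (1024 * 1024)

    def enforce():
        """Apply the rule to every account, uniformly and without discretion."""
        for user in sorted(os.listdir(HOME_ROOT)):
            limits = POLICY.get(user, POLICY["default"])
            used = usage_mb(os.path.join(HOME_ROOT, user))
            if used > limits["hard_mb"]:
                print(user, "over hard limit: further writes would be denied")
            elif used > limits["soft_mb"]:
                print(user, "over quota: nasty-gram advising deletion of files")

    if __name__ == "__main__":
        enforce()

The point is not the particular tool but the shape of the practice: the penalty is decided when the configuration is written, long before any individual user runs up against it.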

Thesis 3. System Administrators should not practice and exist in a political/ethical vacuum.
...and thankfully, the sys-admins Chris knows don't. But non-sys-admins don't think of the practices of system administration in a broad enough context often enough. It's in the hybrid zone between online and offline spaces where the interesting discussions happen, and where the important decisions are taken. So: we are back to politics. For most projects, most services, the question to be asked is no longer "Can we do X?", but "Should we do X?" and then, "How should we do X?" (of course Chris realized that he was giving away the store here, but hey...). Sys-admins, as providers and consumers of technology, are way past the point of setting something up or doing something just for the novelty of being able to do it. They have to approach the creation of online resources, and the manner in which they regulate the use of those resources, in a responsible, grown-up way.

Coda: what does this have to do with Bryn Mawr, and the folks who learn and teach here?
Or more importantly, how do we balance the utility and efficiency of system administration as it's practiced, with concerns about such things as due process, equality, justice, and ethics? Are those who deliberately test and transgress the limits extant in info-spaces somehow patriots? Criminals? Civil libertarians? How do we understand those who compromise or alter our systems? Are they vandals? Hackers? Criminals? Is it desirable to have one person, the sys-admin, responsible for all aspects of making these determinations? Can we imagine a different way to do this? What might a consensus-based administrative model be for online spaces? In what ways can online spaces be more participatory? What sort of model of political existence fits the way in which systems are managed/ruled/administered? What would an alternative way of doing this look like? How do online existences in corporate info-spaces relate to questions of due process, equality, and justice? When we join AOL, we accept a Terms of Service (TOS) agreement. How are these terms drawn up? In whose interest are they tilted? What would happen if a group of AOL subscribers had a digital uprising demanding more rights? When "voting with your feet" has a demonstrable economic equivalency, what happens? Is such an ultimately commoditized political existence (if I don't like being a member/citizen/denizen of this online space, I'll buy admission to another one) a good thing? Should there be spaces created online which are free to anyone, and how can those be regulated?

Discussion followed on the "disembodied" nature of the sys-admin, and on the power he wields because of his invisibility and anonymity. Although Chris insisted that he does not occupy the place of "the management" found at larger universities, that his "presence isn't masked," and that the issue here is simply distance, in miles, participants suggested that he does possess specialized knowledge which he hasn't actively given to us; there is a disparity in knowledge and skill which translates into a disparity in privilege. We need the tools he has. Making, implementing, and enforcing public decisions, he participates in a distribution of power. What makes it stable? How can it be fixed? How can interests be balanced? There are forces, and structural enforcements, to counteract the seats of power. The structure is normalized, not formalized; most work takes place through informal conversation. The ultimate big stick, the ultimate correction, is the pink slip: Chris can be fired.

Sys-admins want to hear from their constituents what facilitates and what hinders their work. It was suggested that the difference between a distributed system and a centralized one, between "state" and "federal" forms of government (which would set policy, and adjudicate and act on violations), depends on the audience. Chris's opening comments were loaded with analogies and tropes; a libertarian ethos permeates work online, where theoretically "anything is allowable," as long as it hurts no one. But how to decide who is hurt, and how? Can online spaces work according to offline governance structures? Is this a representative democracy? These are profoundly political issues: in reviewing the Communications Decency Act, the Supreme Court asked (but didn't answer) the question of what kind of public space the internet is. What's private about it? There are tensions between private, closed systems and public interests. Who decides the appropriate analogy for this sort of "park"? How to do governance in a space (like AOL) which people pay to be part of? Consider the structure of virtual communities, transnational communications systems like listservs which circumvent governments. How are these defined as communities? Or are they just a flow of information? Is there a virtual public sphere? The electronic network has changed our notions of public discourse; it offers a different space for communication which, in this age of insecurity, this time of the Patriot Act, is simultaneously more anonymous and more vulnerable to surveillance.

It was suggested that calling computer-based experience increasingly "interactive" is something of a misnomer, that the so-called "transformed public space" offered by computers actually promotes greater passivity. The nature of our exchanges has changed: communicating electronically requires a different etiquette, and creates a space that is not as dialogic, not as efficient, as face-to-face encounters--which most disciplines still value. Despite the expense and time of real-time conferences, despite the increased offerings of online virtual conferences, we still persist in our drive for face-to-face validation of knowledge, being in the presence of someone who is doing the experiment. In A Social History of Truth: Civility and Science in Seventeenth-Century England, Steven Shapin claims that this interactive component is actually the foundational structure of post-humanist science. Rather than champion this new virtual community as a "true democracy," we need to admit that this form of exchange has its own exclusivity, that not everyone can participate. Cyberspace discourse is not precisely disembodied, but it is not warranted by the body, not validated by the presence of the body. The quality is also different: in a "face-off" between two sets of hands on two keyboards, there is no hierarchy present, as there might be in the meeting, at a conference, between a neophyte and a master in the field.

But it was also suggested that these efforts to rank all forms of community along a single scale were misguided. The internet has a distinctive character which didn't exist before. It enables us to engage one another with "less baggage," to take more time to think than is possible in face-to-face encounters. Distance learning allows non-speakers to be stars, in a slower and, to them, more thoughtful process. It also involves the inevitability of "lower bandwidth"; there is less body knowledge available in online exchanges (as well as, paradoxically, the expression of more imaginative versions of the self than those constrained by the body). If the motive is egalitarian, this is the most egalitarian system we've seen--although it is still largely text-based, different in speed, not in kind, from earlier modes of communication. In comparison with the difficulty of publishing a volume to be held by the Library of Congress, however, there has never been a public place to which the barriers are so low. (Are they too low? How do the uninitiated access the 'net? Do we need to recommend sites and sources so that students don't choose them randomly?)

There are different levels of organized thinking about policies. How useful is the concept of emergent systems here? (At a certain level, an emergent system is an oxymoron; once there is a plan, there is no emergence.) Often a reconfiguration or a denial is perceived as a policy--and policy-making can be understood as occurring from the bottom up. This can be a desperate expedient, as when case load drives policy; then a form of triage results, with only the most serious cases being handled. But when the necessity of implementation doesn't drive the kind of de facto decision-making that easily reifies into policy, upside-down policy-making can be effective. An indigenous client population--the community--should set policy; case workers can be helpful resources, because they are engaged in front-line activity and know how things are going down. A balance is needed between the bottom-up view and the administrative, global view. In a distributed formation, trial and error is key, and it has monetary costs, among others. Are there ways of simulating policy? Of course you want to try things out in the sandbox before any wider deployment happens. There is a whole discipline called systems thinking. The movement to nodes on this campus is a kind of mini-experiment which will influence how we move forward.

There was a debate about the degree to which what Chris called "his system" is everybody's; can it be ours if we do not know how unstable it is? How can the community have a say in setting policy, be educated in the "six different ways" any task might be done, each with different results, in a system we have no knowledge of? How to move from a centralized model of providing services to users to a consensual sharing of the use of a machine? These are questions about equality and access, and they invite richer conversations than those now occurring. Of course, if this avenue is taken too seriously, nothing will ever get done; there must be a balance between consensus and efficiency. (What does it mean, to "get things done"?) Often there is no plan, but just scurrying around, doing stuff, being responsible for broad-brush policies rather than overly detailed work; the sys-admin needs the freedom to play. Chris admitted his discomfort with literal readings of the trope of "his" system; he was simply claiming the pride of work and time put in to make something work, not the "cult of the demon sys-admin," who is dictatorial. It is appropriate for him to take ownership of the machines he has worked on, and pride in their performance--as opposed to tinkering with someone else's. One might also claim ownership of, and take enormous pride in, a system that allows others--even maximizes their ability--to fiddle. This is a craft-based vision, modular, built up from scratch.

Of course the limitations of software (such as PeopleSoft or Blackboard) often drive policy decisions; we are always heavily constrained by structures, by what we can't tinker with. But we can be aware, and ask. At what point? Where does participation in policy happen? Where can we tinker? What are the political conditions for doing so? Where are the local opportunities to effect change ourselves? What (free) alternatives are there to Blackboard? Why is no profanity allowed in our campus e-mail? Is that a limit set by Eudora? Can we adjust it? At the other edge, there are sys-admins who don't want to work in any system that isn't "turnkey," for whom craft is not uppermost, who find "massaging the system" just too much work.

Our conversation will continue next week, when Kris Tapp, a Keck Postdoctoral Fellow who is teaching in the Department of Mathematics, will lead the discussion. He is assigning reading: "The Ideal Mathematician" from The Mathematical Experience by Davis and Hersh. (Copies are available for pick-up outside the Math Office, 3rd floor Park, Tomomi Kinukawa's office in Thomas, and Anne Dalke's office, English House 205).

"What does it mean to prove a theorem in mathematics? Is there an objective way to decide whether a proof is rigorous? Do we discover or invent mathematics? In what sense do the objects that mathematicians discover (or invent) really exist? These questions have been investigated from within mathematics (for example, Gˆdel's Theorem and axiomatic set theory). Sociologists or philosophers outside of mathematics might also study these questions, highlighting discrepancies between what mathematicians do and what they think they do. " Beginning with these questions and the assigned reading, Kris hopes to spark discussion about the philosophy and culture of mathematics.

