Editor’s Note: This article is part of an ongoing series: Against Technological Relinquishment. Click here to read Part One. Part Three will be published on 09/28/13.
The very beliefs that Neo-Luddism shares with Techno-Progressivism and Transhumanism constitute one of the best reasons for arguing that its specific approach – outright relinquishment – is an untenable one. Neo-Luddites point out the massively transformative potential of technologies, and then use this as grounds for relinquishing them, so as to mitigate their dangers and forestall their potential downfalls. We take their approach, pat them on the back (not too heartily, of course) for their starting point, and then flip the course around. We too point out the massively transformative potential of technologies, but instead of arguing for their relinquishment (using such transformative potential as justification), we argue that those same transformative potentials actually increase our ability to successfully shape their outcomes and mitigate their potentially problematic aspects!
This appears to be a distinct and heretofore-marginalized, if not wholly unacknowledged, position on the impact of high technology. The predominant opinion is that our potential to shape technology into forms that embody our values (e.g. ethics, safety) will decrease as we move into the future and as high technology (1) becomes more and more transformative and (2) develops at an increasingly accelerated rate. By contrast, this sentiment holds that the very same technologies that constitute the source of increasing unpredictability and increasing variability can be used to increase our own ability to shape the ultimate embodiments and consequences of those technologies. We forget that the technological infrastructure with the potential to bring about Singularitarian Strong AI (i.e. computers), the epitome of unpredictability, is the very same infrastructure we have used, since the computer’s inception, to track trends and to evaluate and extrapolate statistical correlations. We forget that the radically disconcerting existential risk posed by an intelligence explosion à la I.J. Good can itself be mitigated by implementing a maximally-distributed intelligence explosion, in which everyone is given the opportunity to amplify their intelligence at an equal rate, so as to prevent the accumulation of too much power (i.e. capability to effect change in the world) in any one mind.
We forget, in short, that it is precisely because high technology has such transformative potential that we are able to use it to increase our own ability to shape it.
What are the chances that as soon as it becomes possible to use technology in massively immoral ways, we also gain the ability to shape and affect the parameters of our own morality? That as soon as the potential arises to use technology in stupendously stupid ways (i.e. without consideration of consequences), we also gain the potential to amplify and augment our own intelligence?
What are the chances that as soon as technology seems to be building upon itself in an unending upward avalanche of momentous momentum, we also gain – through the use of those very same technologies – the ability to better forecast cascading causes and effects into the postmost outpost and to better track trends into the forward-flitting future? I’m not saying that this is inevitable or ipso facto the case – only that it is a conceivable notion, and possible to a much greater extent than has heretofore been realized, I think. A closed circle can seem like just that, until adding a vertical dimension reveals that it was an upward spiral all along. We’ve turned upon ourselves to find (or realize) ourselves at least once before, when meat went meta and matter turned upon itself to make mind. Perhaps this was but an echo through time of that final feedback for forward freedom we stand to face, upright and with eyes sun-undaunted, in a future so near that it might as well be here (or so near that we’d better start acting as though it were), where the fat of fate is now kindled anew to light our own spindled fires aspiring ever higher, into parts and selves wholly unknown and holier for it.
I think that dichotomies like the techno-optimist vs. techno-pessimist distinction make it easy for those relatively unimmersed in the issues at hand to assume that techno-optimists wish to realize the beneficial potentials of high technology without regard for its dangerous potentials, and that techno-pessimists wish to negate the dangerous potentials of high technology without much care for its transformatively beneficial potentials. Such two-bit distinctions lack perspective and are more misleading than they are illuminating.
Indeed, techno-optimism is to some extent a distinction that confuses more than it clarifies. Does it denote the sentiment that technology is biased towards being beneficial on the whole rather than bad, or the sentiment that technology’s good potentials are not inevitable but can nonetheless be fostered with proper foresight and deliberation? I know very few people who would endorse the former claim, and many who endorse the latter, which I associate more with Techno-Progressivism than with Techno-Optimism.
[easyazon-image align=”left” asin=”B00AIIMSTU” locale=”us” height=”160″ src=”http://ecx.images-amazon.com/images/I/51%2BnAniOjgL._SL160_.jpg” width=”160″]A Techno-Progressive is not the same thing as a Techno-Optimist, if we accept the first characterization of “techno-optimism”. I wouldn’t endorse the claim that all technologies are freedom-expanding ipso facto. To do so would be to forget or ignore the moral ambiguity of technology – the fact that, generally speaking, most technologies can be used to foster good and bad, creation and destruction, expanding autonomy and constricting autonomy. I don’t think it’s a hard rule (i.e. some technologies are more biased towards destruction rather than creation or systemically embody a certain end-purpose or ideological bias, like guns; guns can be used to free or to disenfranchise, yes, but in the end they’re for killing people) but moral ambiguity seems generally applicable to most new technologies until proven otherwise. In other words morally non-ambiguous technologies appear to be the statistical minority. I do not think all technologies are unambiguously good, but I do think that whether their beneficial or destructive potentialities are fostered depends on us and us alone, both in terms of our use of such technologies as well as in terms of our efforts to shape the ultimate embodiments of emerging, converging, disruptive and transformative (e.g. NBIC) technologies through deliberative discussion, and to some extent advocacy and awareness-raising.
In the end, the moral ambiguity of technology (i.e. that it can foster good or bad) is a virtue, not a vice. It merely represents the transformative, open-ended upwardness of technology. If technology were unambiguously and inevitably one way or the other, then we couldn’t do much in the way of shaping it. If its effects on society and its relationship to humanity were concertedly set in stone, even in the positive direction, then our ability to determine its extent, effects and embodiments would be diminished, not increased! The slipperiness of technology should not be agonized over but exalted. Progress is not a thing, it is us! Progress cannot fail; only we can fail progress.
Image Source: Flickr