Value Beyond Instrumentalization



“What I propose, therefore, is very simple: it is nothing more than to think what we are doing.”
Hannah Arendt


Silicon Valley was founded on an ethic of “move fast and break things” and a boundless techno-solutionism. Although this ethos has been somewhat tempered by the post-2016 techlash, moving fast is still the imperative, and arguments still abound that technology is value-neutral and that anything and everything profitable should be built. Ethical and regulatory development continues to be outpaced by technological development, incurring massive ‘ethical debt’.

This glorification of action over thought is reflected in Silicon Valley's culture and canonical texts. Books like Zero to One and essays like It's Time to Build emphasize the urgent need to create, innovate, hack, and iterate on products with vast social consequences, rather than the responsibility of technologists to pause, reflect, and introspect before doing so. To slow down is to end up default dead; the rhythm and pace at which technology is supposed to be built do not allow for consideration of social consequences. The pattern repeats at an individual level: ethical thought and decision-making are believed to be possible only after one achieves financial freedom.

This tendency to instrumentalize, or to treat something as a means or resource for achieving some end goal, shows up in the personal lives of many technologists. Many types of “fun” are made telic. Books must be read with a note-taking system; exercise must be quantified and tracked; friendships are managed in personal CRMs; casual hangouts and dating apps are covert recruitment missions. This attitude of optimistic instrumentalization is incredibly useful: technologists decide on goals, then effectively marshal resources to achieve them. The most visible success cases are massive companies worth billions of dollars.

However, it also means that Silicon Valley, for all its counterculture-inspired talk about radically reimagining the future, has not laid the strong ethical groundwork requisite for principled construction.

Policymakers and theorists can help here, but given that legal legwork is incredibly slow and that technologists have a unique capacity to understand their own work, technologists bear a moral responsibility for this debt that they cannot offload to third parties. How do we ensure that we build thoughtfully and responsibly without incurring further ethical debt, and begin to make good on the standing debt besides?


“All that was good becomes data. All that was beautiful is now efficient.”
Jacques Ellul, in The Technological Society


In 1954, long before capital-T technology, German philosopher Martin Heidegger argued that this instrumentalization was a natural consequence of organizing life under technology. He claimed that under a technological worldview, the world, including oneself, ceases to have presence or value in its own right, and is regarded instead as a ‘standing-reserve’, that is, merely a resource to be extracted from.

For ease, let’s call this the instrumentalizing worldview, though it has several interdependent components: the desire to optimize for easily quantifiable metrics (usually, in some form, a path or proxy to profit); the tendency to regard everything as a resource; and the assumption that values are comparable and aggregatable.

The instrumentalizing worldview can lead to sweeping hard questions under the rug, such as the consideration of non-quantifiable externalities. It’s much easier to compare the month-over-month increase in the number of interactions between two users of a platform than to deeply investigate the health of their relationship as it is compressed through such a system, or the health of the larger system around them. A technologist can easily optimize the UX of completing a grocery purchase on Instacart and measure NPS scores before and after a code or product change; it’s much more complicated to reason about how making Instacart easier to use impacts the health of the community.

Besides externalities, there are all sorts of moral or political goals one might find valuable but be unable to assign an economic value to, and which are therefore not accounted for at all under such a worldview.

This worldview is additionally limited by acute technosocial opacity,1 where the effects of any given technology are hard to foresee or calculate. Too often, the response to this opacity is to shorten the time horizon we pay attention to: think of how, during COVID-19, many were unable to make plans more than a few days ahead.

This worldview can lead to mental health problems if taken to the extreme, when you begin to treat yourself and other human beings as means to an end.

This worldview allows for environmental destruction, where public goods like oxygenating trees and bodies of freshwater are regarded as resources, and then financialized, purchased, and ‘used up’.2

This worldview is “the water”; it’s embedded in the very metaphors that technologists use to describe themselves, their relationships to others, and the world.


* * *


Technologists must move past accepting this worldview by default, independently consider and develop their own moral principles and worldview, and bring those back into their work.

Trying on different moral systems is a useful step in achieving post-conventional morality. For example, what if we applied deontology to technologists’ work? Kant’s first formulation of the categorical imperative says that you should act in a certain way only if it is possible and desirable for everyone in the world to act in that way.

When I was raising a round for a company I was working on, I was asked a common question: “What is the billion-dollar version of your idea? How can you get everyone using this?” As I painted the picture, a voice whispered in the back of my mind. What, indeed, would the world look like if I did succeed on a massive scale? I realized that Kant’s imperative is particularly applicable to technologists. Those who enter the field usually have world-scale aims. We proclaim boldly that we hope to ‘change the world’; we praise the highly agentic. Founders are constantly asked to envision the path to a world where they’re massively successful. I propose, then, a revision of Kant’s imperative for technologists: do not work on something if you yourself do not want to live in the world where you are massively successful.3 With this reframe, we turn the question of unicorn status on its head and interrogate the ends instead of the means.

To make this thought exercise concrete: what happens in a world where everyone uses Instacart? That world is one in which grocery shopping is entirely instrumentalized. Gone are the local grocery stores and farmers’ markets, relegated to antiquated relics in uber-wealthy zip codes. It would be impossible for the average person to take pleasure in grocery shopping. For us to live in a world where you can take pleasure in grocery shopping, you must yourself take pleasure in grocery shopping now, and by doing so, demand that such a world exist. I’m not saying it’s always bad to use Instacart, or even that the act of grocery shopping is intrinsically valuable. Indeed, values shift as technologies develop; before indoor plumbing, people, frequently women, laboured to fetch water from wells, and most of us are glad that that task is no longer in our lives, even if it may have been enjoyable when it was the only option. But being aware of the technological path carved for you and its consequences is still important, especially if you are helping resource that path with millions of dollars and hours of attention over its alternatives.

One additional imperative I feel confident about: your physical body and mental health should not be instrumentalized in service of being useful to a startup mission, or even a life philosophy. Tech has a culture of ‘bringing your whole self to work’, which really means exposing all of yourself as a resource to be extracted from to make the company successful. Technologists correspondingly also frequently construct their selfhoods from work; I’ve seen too many friends, especially founders, fully meld their self-image and vision with their companies and subsequently suffer cognitive dissonance and even depression. For some, like Steve Jobs, this melding was crucial to the charisma required to achieve their goals, but forcing such a melding is dangerous. Resist being context-collapsed into a one-dimensional being.

Some ideas as you continue to explore your value system:
  • Ask yourself: “What do I like about this world? What pains me about this world?” This helps you develop taste for what is beautiful, as well as what is discordant, unjust, ugly, wrong.
  • Be present. To notice anything requires presence. What does your Uber driver’s voice sound like? Are you more numb today than yesterday to the presence of unhoused people on your walk to work?
  • Pay attention to the metaphors you use. What do they connote about how you view the people around you?
  • Explore what answers different ethical traditions and theories of value might give you, and then construct your own answer as to the way you should be and act.
  • Recognize ethical tensions when they arise; you don’t have to push back right away when your boss asks you to scope out a certain feature, but you can note the tension and shelve your reaction to revisit and examine later.
  • Seek to understand your situatedness in broader systems of power, even when your immediate day-to-day work does not surface ethical tensions to you; reflect on your own locus of control and your sense of agency.

Find a community that is also asking these questions. Americans have largely been taught to solve age-old questions of value and meaning individually, to figure out everything rationally, to evaluate information and authority figures piecemeal. Being forced to solve everything anew as an autonomous (read: lonely) rational agent results in increased guilt, overwhelm, and uncertainty.4 You could join a community like Reboot or the Bento Society, explore your local faith communities, or find a community of technologists thinking about these questions, like Interact.

The general public must begin to think of technologists as doing deeply human and value-laden work. Technologists intervene in our present realities and forge the future, and in doing so, choose how best to model the world and impress their will upon it. The public must insist that technologists are responsible for thinking through the human implications of their work. This ongoing ethical cultivation should be a core part of the technologist identity.

To make this reflection possible, the technology ecosystem must maintain non-instrumental spaces (as opposed to, say, a space like Y Combinator, which is meant to help you reach a specific external outcome under time-bounded pressure) where technologists can play with ideas and think freely, similar to parks and urban forests in a bustling city. Such spaces for paratelic play have existed at various points in Silicon Valley’s history, from Bell Labs and Xerox PARC to the first generation of hackathons, but they have largely been discarded or commercialized. Technologists must have (literal) space for thought.

Technology, as defined in an earlier essay in this series, is a path to some ends. This is a call to be thoughtful about which paths we choose to construct, how we resource those paths, and what intermediate and terminal ends and value systems those paths lead us to. As a technologist, your thoughtfulness and attention have privileged leverage. Out of an infinity of possible ends, decide carefully where to apply your time, resources, and attention, and what worlds you wish to bring about. This is a call to build a beautiful and deeply good future. Future generations await with bated breath.


“I am. We are. That is enough. Now we have to start.”
Ernst Bloch

 


Many thanks to Matthew Jordan, Saffron Huang, Tammy Winter, Anna Mitchell, Maran Nelson, Toby Shorin, Jacky Zhao, Jasmine Sun, Julian Shapiro, and Jessica Dai for their thought partnership, lovely conversations, and feedback.




Jasmine Wang is a writer and researcher who sometimes coaches her friends through existential crises. She can be found at @j_asminewang and at jasminew.me.




1 Defined by Shannon Vallor in Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. She argues the opacity results from a number of factors, including accelerating technological change and the increasing inter-relatedness of techno-social systems.

2 Buber asks, contemplating a tree in I and Thou, what becomes possible if we enter into an I-Thou relationship with a tree, rather than thinking only of its attributes and/or utility.

3 Some more food for thought: for the Epicureans, the good life consisted of simple pleasures, like a good conversation with friends over a well-crafted meal. What are our responsibilities towards ourselves, our loved ones, and the universe at large? Confucian values prioritize the family above all. How do we execute on those responsibilities while still living our lives gracefully? Traditional Judaism values hard work, but also holds that you should observe a day of rest, for you to be alone with God.

4 Refining your values and building your ethical worldview cannot be a solo act of logical deduction and calculation. Ethical traditions are not simply about sets of ideas; they are about groups of people who embody those ideas in practice. Maslow suggests in the appendix of The Farther Reaches of Human Nature that the subculture of juvenile delinquency in America developed largely because of a lack of strong parental role-modelling of a coherent (even if not necessarily ‘correct’) value system.