Original Article

The Algorithms of Mindfulness

Johannes Bruder 1,2

Science, Technology, & Human Values, 1-23. © The Author(s) 2021.
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/01622439211025632
journals.sagepub.com/home/sth

Abstract

This paper analyzes notions and models of optimized cognition emerging at the intersections of psychology, neuroscience, and computing. What I somewhat polemically call the algorithms of mindfulness describes an ideal that determines algorithmic techniques of the self, geared at emotional resilience and creative cognition. A reframing of rest, exemplified in corporate mindfulness programs and the design of experimental artificial neural networks, sits at the heart of this process. Mindfulness trainings provide cues as to this reframing, for they detail each in their own way how intermittent periods of rest are to be recruited to augment our cognitive capacities and combat the effects of stress and information overload. They typically rely on and co-opt neuroscience knowledge about what the brains of North Americans and Europeans do when we rest. Current designs for artificial neural networks draw on the same neuroscience research and incorporate coarse principles of cognition in brains to make machine learning systems more resilient and creative. These algorithmic techniques are primarily conceived to prevent psychopathologies where stress is considered the driving force of success.

1 Institute of Experimental Design and Media Cultures/Critical Media Lab, FHNW Academy of Art and Design, Basel, Switzerland
2 Milieux – Institute for Arts, Culture, Technology, Concordia University, Montreal, Quebec, Canada

Corresponding Author: Johannes Bruder, Institute of Experimental Design and Media Cultures/Critical Media Lab, FHNW Academy of Art and Design, Freilager-Platz 1, CH-4002 Basel, Switzerland. Email: johannes.bruder@fhnw.ch
Against this backdrop, I ask how machine learning systems could be employed to unsettle the concept of pathological cognition itself.

Keywords

attention, rest, sleep, information overload, cognitive neuroscience, artificial intelligence

Introduction

In their best-selling book Peak Performance, science writers Brad Stulberg and Steve Magness (2017b) refer to a friend who rarely replies to texts, for his employer has “nailed the recipe for stress” (p. 76). When Stulberg and Magness wrote their book, Adam (name changed) was an engineer at Google’s self-driving car project and his attention was ever absorbed by projects “where struggle and productive failure aren’t consequences of the work, but rather the driving forces behind it” (p. 76). Yet, according to the authors, Google has meanwhile understood that keeping their employees busy is only half the battle. “Without rest,” Stulberg and Magness write, “Google wouldn’t end up with innovation. Instead, it’d end up with a workforce that is broken down and burnt out” (p. 76).

Books like Peak Performance and Google’s Search Inside Yourself (SIY) program described within exemplify a movement toward technological fixes for working at or over capacity in North American and European societies.1 Corporate mindfulness trainings and meditation apps, for instance, promise to help prevent attentional lapses and detail each in their own way how periods of idleness are to be recruited to combat the effects of stress and information overload. In this context, mindfulness practices are reconceived as techniques of the self that augment cognitive labor. I argue that the rising interest in mindfulness practices is symptomatic of an interest in recruiting rest, or the flipside of what we know as attention “to task,” for enhanced information processing on behalf of the cognitive worker.

In the first part of the paper, I draw on analyses of corporate mindfulness and meditation apps to show how rest is reframed within.
This links mindfulness trainings and apps to research on wakeful rest in cognitive neuroscience, which I sketch in the second part of the paper. Thanks to investigations into the brain’s so-called resting state starting in the 1990s, the brain of cognitive neuroscience has gradually been reconceived as industrious and never idle. Said research led to a reframing of the resting state as a mode of information processing where our brains not only recharge their cognitive capacities but deviate from the “narrow if-then highway” (Stulberg and Magness 2017b, 88). Google’s artificial intelligence division DeepMind, among others, has taken up these ideas and adopted coarse principles of the resting brain to improve the capabilities of their machine learning systems. In the final section of the paper, I elaborate on this algorithmic implementation of rest to show how rest is therein further removed from idleness and reconceived as a mechanism that supposedly augments information processing and keeps the effects of information overload at bay.

In contrast to mindfulness trainings and apps, Google DeepMind’s developments do not (yet) have an immediate effect on the behavior of individuals. Similar to the Facebook patent applications analyzed by Tero Karppi, these algorithmic models are critical media that process our current realities at the level of discourse: they give us “a set of ideas who the user is, what they do, and who they can become . . . ” (Karppi 2020, 49). Ongoing exchanges between computer science and neuroscience hence pass as epistemic experiments, which begin to bleed into psychologies and pedagogies that shape work-lives in North American and European societies. How can we make use of this knowledge in different ways to unsettle the concept of cognitive labor and the user instead of further internalizing the subjectivities that corporate mindfulness and experimental machine learning systems project onto us?
Mindfulness, Incorporated

In two articles published in 2018, a group of researchers who subscribe to conducting contemplative neuroscience voiced their concerns regarding the ongoing muddling of the ancient Buddhist practice of mindfulness through corporate programs and apps. The authors write that many of the practices that undergird mindfulness “arose in religious and spiritual contexts where the motivations and goals for what could and would be achieved through meditation differed greatly from secular Western notions of health, well-being, and flourishing” (Van Dam et al. 2018b, 68). Even if reliable measures of “flourishing” and “well-being” could be attained, it would remain unclear, from a psychological standpoint, whether the practices conceived as mindfulness trainings are well-suited to attaining such high-level goals.

Theirs is a rebuttal to an ongoing commodification of mindfulness practices, which finds support also in their own fields. For instance, neuroscientist Alissa Mrazek and colleagues argue that apps provide “an unprecedented opportunity to deliver high-quality training to an increasingly internet-connected global audience” (Mrazek et al. 2019, 81). They emphasize the reach and seamlessness of apps in comparison to classic, place-based psychiatric therapy, which is becoming ever more important in workplaces that are primarily or entirely digital and screen-based. In general, the two camps do not disagree about the value of mindfulness practices, yet they are divided over the question whether mindfulness should be reconceived to neatly fit into the packed schedules of North American and European executives and white-collar workers.
“With the current use of umbrella terms,” the contemplative neuroscientists write, “a 5-minute meditation exercise from a popular phone application might be treated the same as a 3-month meditation retreat (both labeled as meditation) and a self-report questionnaire might be equated with the characteristics of someone who has spent decades practicing a particular type of meditation (both labeled as mindfulness)” (Van Dam et al. 2018a, 38). Contemplative neuroscientists are concerned that mindfulness might turn into a mere technological fix for the problems that creative economies and digital cultures have wrought.

In this context, mindfulness trainings and apps sit next to digital detox programs (Beattie and Cassidy 2020; Sutton 2020), conceived to counter the effects of affective bonds introduced through the patents, policies, and business models of digital platforms (Baym, Wagman, and Persaud 2020). Apps promise to alleviate the effects of “how our cognitive capacities are captured and modulated on these platforms at the level of affective flow” (Karppi 2018, 10-11). In contrast to digital detox programs, however, mindfulness trainings and apps are typically designed to allow for affective bonds to be sustained. The very same devices that first induce digital distractions are employed to support changes in behavior and promise “attention by design” (Jablonsky forthcoming).2

Corporate mindfulness trainings pursue similar objectives and reconceive mindfulness as a behavioral technique. For instance, Workfulness, the corporate well-being program of Scandinavian telecommunication company Telenor, has been developed for companies that seek to create a healthy digital working environment by reducing digital distractions in the workplace: it “aims at refocusing the employees’ attention to more constructive behavior” (Guyard and Kaun 2018, 543-44).
Central to Google’s mindfulness strategy—which is prominently featured in Stulberg and Magness’s book—is their own SIY program. SIY supposedly supports employees in sustaining peak performance by strengthening their “ability to stay present and be aware of what’s happening as it’s happening” (Search Inside Yourself Leadership Institute 2019). Ruchika Sikri, Google’s Well-being Learning Strategy Lead, explains these tactics by comparing the mind to a snow globe:

We’re constantly shaking it with information overload, distractions and task switching. This results in reduced clarity of our priorities and a lack of focus. By practicing a brief meditation (as short as five minutes!)—we can let the “snow” settle and see things more clearly and vividly. Clarity of mind can help us prioritize what’s important, solve problems better, figure out new strategies or uncover issues we may have ignored. (Parcerisa 2019)

Google’s SIY, Telenor’s Workfulness, and other corporate programs position mindfulness as an intermittent mode of rest, which departs from traditional understandings of rest as the cessation of labor or nonactivity. They emphasize intellectual flexibility and emotional resilience in workplaces, digital and analog (Cook 2016; Ferguson 2016; Parviainen and Kortelainen 2019). Instead of disconnecting entirely, cognitive workers are called upon to go off-line and regenerate their cognitive capacities to reconnect to the world. What was once a meditation practice that requires years of training is now often recruited to support “fitness for work” (Hull and Pasquale 2018) and has therefore taken on “feats of athleticism” (Gregg 2018).
Janice Maturano, a former Vice President and Deputy General Counsel at the American consumer food manufacturer General Mills and founder of The Institute for Mindful Leadership, states: “Just as we understand that there are innate capacities of our bodies that can be trained to make us more resilient, flexible and stronger, we now know from neuroscience research that there is universal training we can experience that can cultivate and strengthen our mind’s capacities. And, the good news is that it doesn’t require any special equipment or a gym membership” (The Institute for Mindful Leadership 2016). That is, rest is not only recruited to regain focus and attention—it allows switching into modes of thought that complement the fast-paced cognitive labor that companies, such as Google, have branded. In Stulberg and Magness’s (2017a) words, rest “isn’t lazily slothing around; it’s an active process in which physical and psychological growth occurs. To reap the benefits of stress, you need to rest.”

Corporate mindfulness projects an industrious subject that is never really idle—a mode of subjectivity that is in fact firmly rooted in cognitive neuroscience research on the brain’s “resting state” (Callard and Margulies 2010). Stulberg and Magness, for instance, prominently reference the work of neuroscientist Marcus Raichle, who began studying the brain “at rest” in the 1990s and has since continued this line of research by investigating the brain’s “default mode” of operation. This change of perspective—from rest to default mode—is based on a “flipping of contrasts” in the psychology laboratory that occurred in the 1990s (Callard and Margulies 2011).

Rest, Reframed

In 2007, neuroscientists Alexa Morcom and Paul Fletcher (2007) published an article in the highly influential journal Neuroimage that lamented the ever-growing interest in what the brain does when we rest.
Although a resting subject’s brain might well be active, the authors saw no reason why such activity should be conceived as the source of creativity and subjectivity, as some of their fellow neuroscientists would have it. Morcom and Fletcher appeared to be particularly disturbed by the fact that participants were by then increasingly asked to simply rest during experiments. I myself remember many off-the-record conversations in neuroscience labs from roughly ten years ago, where researchers mocked resting state research as mere experimental laziness.

Most experiments I witnessed doing fieldwork in cognitive neuroscience labs had been dominated by problems, instructions, or stimuli defined by the experimenter and executed by the volunteer. Any measurements of mental and cognitive activity—whether via electroencephalography, positron-emission tomography, or functional magnetic resonance imaging (fMRI)—were typically conducted if and when the volunteer was occupied with an experimental task. In fact, experimental psychology and neuroscience had since the late nineteenth century been characterized by “an uncanny proximity between subjective responses to a task delivered in the laboratory and one prescribed on the shop floor” (Morrison et al. 2019, 64). Nevertheless, asking volunteers to put their brains “at rest” in the fMRI scanner had always been an important element of brain imaging studies, for brain activity at rest was conceived as “control condition” and thus “the flipside of a range of focused, controlled and externally oriented processes: an image in negative of the aware and externally attentive brain” (Alderson-Day and Callard 2016, 12).

In the 1990s, neuroscientists developed a vested interest in distinguishing the components of resting state activity, and initially they simply looked the other way.
Instead of subtracting the activities of the brain at rest from what happens when the volunteer’s brain is hard at work, they started to search for brain regions that show increased activity during periods of rest and found a network of brain regions we now know as the “default network.” In this process, what had been considered as mere background noise that tends to obscure the cognitive activity of the brain became the target of analysis—“an organized, baseline default mode of brain function that is suspended during specific goal-directed behaviors” (Raichle et al. 2001, 676).

Crucially, cognitive neuroscientists gave up on the idea that the default mode is bound to extended periods of rest and began to more generally analyze mental processes that had been disregarded by cognitive neuroscientists, for they were considered unrelated to active, cognitive processes, and hard to summon in the laboratory. They developed strategies that would keep volunteers from following the traditional, attentive routines of the psychology lab so that they stay “off task.” The goal of these experimental strategies gradually changed, from identifying brain regions that are active when we rest to creating the conditions where volunteers’ minds could stray. The underlying cognitive activity is now variously referred to as self-generated, task-independent, stimulus-independent, unconstrained, or spontaneous thought. The rising interest in these phenomena and the brain’s default mode was based on the idea that “conscious experience is relatively more dependent on the individual’s concerns, preoccupations and hopes (i.e., self-generated), rather than immediate perceptual input (i.e., perceptually generated)” (Callard et al. 2013, 1).3 The described, experimental shifts in brain imaging have since contributed to the idea that brains are, in fact, entirely unrestful and allowed for a default mode phenomenology to emerge.
Default Mode Phenomenology

In 2010, resting state forerunner Marcus Raichle (2010) published a paper on the brain’s “dark energy.” The metaphor latched onto the similar concept of dark energy in physics, which allows one to speak about phenomena that cannot be reliably measured or explained, although their effects can be observed. The metaphor was introduced as an auxiliary hypothesis to explain why the expansion of our universe keeps on accelerating; in neuroscience, it helped explain why the brain remains active when the demands of the environment abate. The dark energy metaphor grasped the growing interest in the contents of self-generated mental activity, which researchers had largely ignored—also since experimenters technically need the help of volunteers to know when it occurs and catch their mind wandering.4

In the process of mind wandering, memories are recalled for simulations of the future based on experiences in the past, which is why we sometimes imagine lying on a pristine beach while staring into gray, postindustrial landscapes that whizz by the windows of a commuter train.5 More than fantasy and (day)dreaming, mind wandering has been linked to the process of drifting away and interrupting whatever activity had been carried out before. In psychology, it had been conceptualized as task-unrelated since laboratory practices “have repeatedly assumed that experimental subjects must have some task to wander from” (Morrison et al. 2019, 67). The so-called executive failure theory, for instance, postulates that mind wandering amounts to a loss of focus and control over our mental activity (McVay and Kane 2010). Psychologists Jennifer McVay and Michael Kane offered that mind wanderers are characteristically unable to stay on task, unconsciously drift away and fail to return their attention to the outside world, and unable to take notice of their mental absence.
What is to be avoided, from this perspective, is getting lost in one’s thoughts, whether these are positively connotated as in daydreaming, negatively connotated as in depressive rumination, or entirely erratic as in psychotic episodes (Wotruba et al. 2014).

Yet, the differentiation between highly valued attentiveness and pathological introversion has been complicated throughout the last two decades. The fact that humans are mind wandering up to 50 percent of their waking life alone speaks against the cognitive insignificance or any pathological character of mind wandering per se. Proponents of the “perceptual decoupling” hypothesis suggest that mind wandering is characteristic of a cognitive state where we attend to normally sub- or nonconscious processes that occupy parts of our brain throughout the day and not only when we rest (Baird et al. 2014; Hove et al. 2016). That is, neuroscientists meanwhile believe that mind wandering might be indicative of a subconscious, yet system-critical mode of information processing, which occupies our attention whenever we indulge in our thoughts, but generally benefits our ability to stay focused and “on task” (Shepherd 2019).

Uncontrolled mind wandering is billed as the source of cognitive pathologies such as attention deficit hyperactivity disorder, autism, depressive rumination, schizophrenia, and obsessive thought (Tang, Hölzel, and Posner 2015). If mind wandering is voluntary, however, it is linked to creative thinking, imagining the future, social problem-solving, memory consolidation, and a general openness to new experiences (Beaty et al. 2018; Murphy et al. 2018). In this case, mind wandering amounts to a particularly vivid form of “off-line thought” (Smallwood and Schooler 2015) or “off-line perception” (Fazekas, Nanay, and Pearson 2021).
In other words, a phenomenological ambiguity sits at the heart of the concept: mind wandering is considered a source of pathology if it cannot be controlled, but it can be beneficial and productive if it is “goal-directed” (Christoff et al. 2016). Kieran Fox, one of the currently most prolific researchers in the field of contemplative neuroscience, explains this phenomenological ambiguity of mind wandering in an interview with ALIUS Bulletin:

I don’t think of mind-wandering as a conscious state—I think of these processes as more or less ongoing, below the level of awareness, competing with other inputs and signals in the brain for our attention. We can tune in and pay attention to them, or not, and sometimes the thoughts will be strong enough or emotionally salient enough to grab our attention even when we don’t want them to. I think of the stream of inner thought in a way similar to other perceptual channels; for instance, you are constantly receiving a stream of auditory information, even when you’re asleep, but your brain is very good at blocking out probably 99% of this information as totally irrelevant, and you never become conscious of it . . . . I suspect the brain is constantly generating thoughts, imagery, and so on at a “subthreshold” level as well, and noticing it is more a matter of this content catching our attention and becoming illuminated by our conscious awareness than of entering a particular conscious state where mind-wandering then starts or is allowed to take place. (Fox and Koroma 2018, 4)

In general, this reframing of mind wandering as a form of subconscious information processing is a welcome de-pathologizing gesture. It liberates self-generated mental activity from the stigma that emerged thanks to task-focused experimentation in psychology labs and links it to the concept of system maintenance via the metaphor of a default mode. At the same time, this reframing suggests putting the burden of managing attention on the individual.

In the context of trainings and apps, mindfulness is reconceived as a behavioral technique that regenerates and recalibrates attention. An emphasis is placed on training minds and bodies to gain attentional and affective control under conditions of information overload. That is, rest is co-opted to support working at or over capacity and reframed according to the idea that the subconscious mind never rests. The brief and intermittent periods of rest that mindfulness-based interventions permit offer respite from or sustain attention, yet they also promise to facilitate access to alternative modes of thinking.

Whereas mindfulness was once considered an antidote to mind wandering, contemplative neuroscience now suggests that it can help “steer people away from the negative biases that we see in mental illness, and instead nudge them toward positive, constructive, and creative patterns of thinking” (Fox and Koroma 2018, 11). A remarkable passage in Stulberg and Magness’s (2017b) book Peak Performance captures this transformation of unproductive rest into an “Other” mode of information processing extraordinarily well:

Our subconscious mind functions in an entirely different manner than our conscious mind. It breaks from the pattern of linear thinking and works much more randomly, pulling information from parts of our brain that are inaccessible when we’re consciously working on something. It is in these parts of the brain, in the vast forests bordering the narrow “if-then” highway that our conscious mind runs on, where our creative ideas lie . . . it’s only when we turn off the conscious mind, shifting into a state of rest, that insights from the subconscious mind surface.

Drawing on the idea of a default mode, Stulberg and Magness link focus and undivided attention to the execution of mundane tasks and a narrow “if-then” highway.
What they carve out—metaphorically—is a space of play that is buried below and which can only be consciously accessed when we actively go off-line and put our brains at rest. In this context, rest is reconceived as a mechanism that supports resilient and creative information processing—in both brains and large-scale technical systems.

Algorithmic Modulations of Attention

Adam, the restless Google engineer who rarely replies to texts, is only one of many examples presented in Stulberg and Magness’s (2017a) book; and yet he is a very memorable one, since Adam was at that time wholly immersed “in the brains and guts of a car . . . to teach an inanimate object moving at 70 miles per hour to differentiate between a stray plastic bag and a stray deer.” This is to say that Adam and Google’s prototype self-driving car essentially face the same problem: neither Adam nor self-driving cars have the opportunity to escape their informationally dense environments. They are called upon to sustain attention to task while being confronted with an endless stream of information that threatens to overwhelm their cognitive capacities.

Without necessarily taking account of ongoing exchanges between cognitive neuroscience and machine learning, Peak Performance draws attention to the slippages that characterize current research in both fields. For instance, Google DeepMind speaks of a “virtuous circle” between neuroscience and AI, “whereby AI researchers use ideas from neuroscience to build new technology, and neuroscientists learn from the behaviour of artificial agents to better interpret biological brains” (Hassabis, Summerfield, and Botvinick 2017). This parallelization of human and machine in cognitive science, computing, and public discourse goes back to mid-twentieth century North American social science and the concept of information overload (Levine 2017).
Nick Levine traces it to the work of American psychologist James Grier Miller and his article “Information Input Overload and Psychopathology,” which was published by the American Journal of Psychiatry in 1960. Central to Grier Miller’s theory is that living systems have limited channel capacity to process incoming information and that pathological behavior likely results from failures in the communication of internal subsystems. Very much in the tradition of cybernetics, Grier Miller applied his theory to systems from the microscopic to the sociological, from the neuron to the corporation.

Grier Miller’s universalist understanding of information overload became characteristic of 1960s and early 1970s complex systems theory that was largely indifferent to the fundamental disparity of human and machine. For instance, social scientist and artificial intelligence forerunner Herbert Simon (1971) observed that information overload “creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it” (p. 41). Throughout the 1980s and 1990s, the concept leaked into management science and became the dominant element of an emerging public discourse on the dangers that accompany the data deluge as well as the proliferation of gadgets, screens, and user interfaces.

The idea of finite cognitive bandwidth is now firmly embedded in both the neurosciences and computing. Neuropsychologists would frame the attendant problem as a “stability-plasticity dilemma” that haunts artificial and biological systems in a similar way (Mermillod, Bugaiska, and Bonin 2013). A pertinent example of a pathology that derives from the stability-plasticity dilemma is “catastrophic forgetting.” As a concept, catastrophic forgetting is native to the machine learning domain, but it compares to the traumatic memory loss that humans experience under conditions of shock.
Catastrophic forgetting occurs when an artificial neural network is trained on different tasks and “forgets” one task in favor of another. For instance, take a network that is trained to play legacy ATARI games such as Space Invaders and Pac-Man. The network will start by trying out random strategies and consecutively “learn” to master Space Invaders by memorizing the strategies that lead to success in this very game. If the network is trained on Pac-Man thereafter, it might completely erase and overwrite its knowledge of Space Invaders.

The problem of catastrophic forgetting has been approached as an issue of algorithmic attention. In contrast to humans, artificial neural networks are not good at differentiating between useful and superfluous knowledge, which is why their attentional capacities need to be carefully managed. Some researchers suggest forcing the network into paying “hard attention to task” (Serra et al. 2018); others promote machinic variants of synaptic memory consolidation (Kirkpatrick et al. 2017). These seemingly different strategies have in common that they seek to protect knowledge from being accidentally erased. Attention is figured “as a means of guarding against undesirable synaptic changes” (Lindsay 2020, 16).

Yet, humans remember and forget primarily when they rest, which is why many machine learning researchers resort to insights from resting state and default mode neuroscience. Wakeful rest and sleep are considered to have a double role in the learning process: if we do not need to pay attention to our environment, our brains supposedly “take out the garbage” and purposefully forget to make space for new knowledge. At the same time, we replay experiences from memory to solve problems creatively and store what is important to long-term memory (Langille 2019; Lewis, Knoblich, and Poe 2018).
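The logic of “synaptic memory consolidation” can be caricatured in a few lines of code. The sketch below is a deliberately minimal illustration, not DeepMind’s implementation: a single weight is trained by gradient descent on two toy “tasks” in sequence, and a quadratic anchor term stands in, very loosely, for the Fisher-weighted penalty of elastic weight consolidation (Kirkpatrick et al. 2017). The task optima, learning rate, and `lam` are arbitrary illustrative choices.

```python
# Toy illustration of catastrophic forgetting and a consolidation-style
# remedy in the spirit of Kirkpatrick et al. (2017). A single weight w is
# fit by gradient descent on task A (optimum w = 2.0), then on task B
# (optimum w = -1.0). All targets and constants are illustrative.

def loss(w, target):
    """Squared distance of the weight from a task's optimum."""
    return (w - target) ** 2

def train(w, target, steps=200, lr=0.1, anchor=None, lam=0.0):
    """Gradient descent on (w - target)^2; if an anchor is given, add a
    quadratic penalty lam * (w - anchor)^2 that discourages the weight
    from drifting away from previously learned values."""
    for _ in range(steps):
        grad = 2 * (w - target)
        if anchor is not None:
            grad += 2 * lam * (w - anchor)  # consolidation term
        w -= lr * grad
    return w

w_a = train(0.0, target=2.0)                          # learn task A
w_naive = train(w_a, target=-1.0)                     # then task B, naively
w_ewc = train(w_a, target=-1.0, anchor=w_a, lam=4.0)  # with consolidation

print(f"task-A loss after naive training on B: {loss(w_naive, 2.0):.2f}")  # 9.00
print(f"task-A loss with consolidation:        {loss(w_ewc, 2.0):.2f}")    # 0.36
```

Naive sequential training drives the weight all the way to the second task’s optimum and erases the first task; the consolidation term instead settles on a compromise that keeps the task-A loss markedly lower. This is exactly the sense in which such penalties “guard against undesirable synaptic changes.”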
Current designs for artificial neural networks involve mechanisms that reproduce this behavior in very coarse ways—yet, without factoring actual rest into the equation. Researchers discuss mind wandering as a principle of resilient and creative information processing in machine learning (van Vugt 2018) or suggest encoding artificial rest and sleep into their systems (González et al. 2020). In artificial neural networks, rest turns into a mere algorithmic mechanism—after all, artificial neural networks never actually rest. In Scientific American, physicist/neuroscientist Garrett Kenyon accordingly describes rest in artificial intelligence in terms similar to Peak Performance authors Stulberg and Magness’s take on mindfulness in corporate settings:

Sleeplike states in neural networks are very different from the mode your PC enters after some set period of inactivity. A conventional computer that has gone to “sleep” is effectively in suspended animation, with all computational activity frozen in time. And the age-old advice from the IT department to try “turning your computer off and then on again” when a PC gets glitchy is tantamount to exposing your machine to a brief period of brain death. That kind of sleep mode would do nothing to settle an unstable neural network. And power cycling would simply reset the network and undo any prior training, effectively giving the network a severe case of amnesia. In neural networks as well as living creatures, a sleeplike state is not inactivity, but a different kind of activity that is crucial to the proper functioning of neurons. (Kenyon 2020)

Such posthumanist rhetorics gradually blur the differences between biological and artificial systems—with the effect that rest is consolidated as yet another form of cognitive labor. It is this reframing of rest that fueled the hype around brief and intermittent mindfulness exercises as a source of psychological resilience and creativity.
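One common way such “sleeplike states” are operationalized is experience replay: during an offline phase, the network rehearses stored past experiences instead of processing new input. The sketch below is a caricature under my own assumptions (a one-weight model, toy task optima of 2.0 and -1.0, hand-picked buffer contents and step sizes), not any published system; it interleaves replayed task-A experiences while the model learns task B, which softens the overwriting of earlier knowledge.

```python
import random

# "Rest" as experience replay, caricatured: while learning task B, the
# model periodically rehearses stored task-A experiences, mimicking the
# offline replay attributed to resting brains. Purely illustrative.

random.seed(0)

def sgd_step(w, x, y, lr=0.05):
    """One gradient step on squared error for the linear model w * x."""
    return w - lr * 2 * (w * x - y) * x

# Task A: y = 2x; task B: y = -x. Task-A experiences go into a buffer.
buffer_a = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5)]
task_b = [(x, -1.0 * x) for x in (0.5, 1.0, 1.5)]

w = 0.0
for _ in range(300):                  # "wakeful" learning of task A
    x, y = random.choice(buffer_a)
    w = sgd_step(w, x, y)

for _ in range(300):                  # learning task B ...
    x, y = random.choice(task_b)
    w = sgd_step(w, x, y)
    xr, yr = random.choice(buffer_a)  # ... with an offline replay step
    w = sgd_step(w, xr, yr)

print(f"final weight: {w:.2f}")       # ends up between the two task optima
```

Without the replay step, the weight converges to task B’s optimum and task A is erased; with it, the model retains a compromise. The replay buffer is, of course, just more computation, which is the point of the argument above: the network’s “rest” is simply a different kind of activity.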
Corporate mindfulness trainings subsume the Buddhist practice of mindfulness meditation to the logics of psychological resilience and thus reflect “a disturbing utilitarianism—a partial adoption of asceticism that is actually the antithesis of productivity’s insatiable appetite for self-enhancement” (Gregg 2018, 122). Recent experiments in machine learning and artificial intelligence contribute to this reframing of rest as a cognitive technology. Against this backdrop, I would like to conclude by focusing on the question of what it would take to escape these logics of necessity: can machine learning systems be employed to experiment with alternative cognitive subjectivities?

Conclusion

The current prominence of mindfulness trainings and apps suggests that North Americans and Europeans increasingly think about mindfulness, and about their lives more generally, in algorithmic terms. Technologies play an important part in this process. As Ruckenstein and Schüll (2017) observe, apps, trackers, and new, device-based pedagogies “bring machinic agency to bear on human ways of defining, categorizing, and knowing life” (p. 269). At the same time, machine learning researchers experiment with implementing coarse principles of cognition in the human brain in artificial neural networks and thus prepare the ground for selling these as generative models of how humans perceive and learn. Whereas the algorithmic modeling of cognitive processes is conceived to augment artificial intelligence, neural networks and neuromorphic devices supposedly further our understanding of cognition in the brain. In other words, both contemporary neuroscience and neuroscience-inspired machine learning research appear to close in on algorithmic understandings of cognition in humans and machines. The related—by now rather speculative—transpositions are not always and inherently problematic.
Yet, in their current form, they invite us to think about cognition primarily as a problem of preventing pathology. This tendency is exacerbated in neuroscience-inspired artificial neural networks. They provide working models of cognitive labor under conditions of overload and lend themselves well to experiments with technological fixes for the effects of working at or over capacity. Yet, Adam’s issues with work–life balance and the difficulties that (Google’s) driverless cars have with differentiating between plastic bags and stray deer appear comparable only within an information processing framework that presupposes the inevitability of overload. While these techniques and technologies may help alleviate the effects of working at or over capacity, they simultaneously burden the worker, and the worker alone, with managing overload—and thus reframe rest as yet another form of labor. Current exchanges between cognitive neuroscience and cognitive computing suggest that “there is no idle time, either for human or non-human actors,” as media historian Markus Krajewski (2018) writes in his book The Server. “A human servant is never physiologically inactive. The same applies to a machine: as soon as it goes idle, it turns to maintenance tasks, attending to its own treatment and care” (p. 346). If we want to think beyond this reframing of rest as cognitive labor, we need to situate and historicize the underlying epistemology. The current interest in determining the algorithms of mindfulness is rooted in a reorientation toward nonconscious cognitive processes in North American and European cognitive neuroscience since the 1990s. It gradually drew attention to patterns of infrastructural activity in the brain and thus aligned our understanding of biological cognition with contemporary paradigms of information processing, exemplified in cloud computing (Bruder 2019).
Thinking about humans as information processors, however, implies neither that we gear our entire lives toward managing overload that is imposed nor that the search for algorithms of mindfulness needs to turn into an endeavor of relentless optimization. The knowledge that contemporary neuroscience produces, and that machine learning research selectively perpetuates, lends itself well to unsettling old dichotomies, such as those between attention and distraction or between task and rest. Rather than resorting to this knowledge only to overcome pathologies that derive from our social and informational environments, it might aid in exposing the frameworks that naturalize overload and define the inability, or unwillingness, to succumb to it as pathological. If, as I suggested earlier, the reframing of mind wandering as subconscious information processing is a de-pathologizing gesture, this gesture may also be understood as losing interest in perpetuating the notion of psychopathology more generally. Could machine learning systems support this process? In Cloud Ethics, Amoore (2020) writes that algorithms “cannot be controlled via a limit point at the threshold of madness because the essence of their logic is to generate that threshold, to adapt and to modulate it over time” (p. 110). Machine learning algorithms internalize and exhibit significant aspects of the rationalities that undergird them—they “are never far from their conjoined histories with psychosis, neurosis, trauma, and the imagination of the brain as a system” (p. 114; see also Halpern 2014). At the same time, they generate new forms and models of what is considered normal and abnormal, precisely because they merely internalize some coarse principles and fragmentary sets of data that can only ever be partial.
Machine learning systems do not—in a traditional way—understand what they are modeling and therefore potentially and accidentally unsettle the very rationalities they were made to internalize. I believe that studying and experimenting with these systems can contribute to thinking beyond currently paradigmatic epistemologies of machine learning, and toward diversifying or queering related concepts in North American and European neuro-psychologies. The flipping of contrasts that created an opening for de-pathologizing mind wandering might be a good model for algorithmically unsettling the idea of pathological cognition. This process demands continuous and recurring engagement with the technicalities of algorithmic systems and the knowledge practices they implement—“continual, careful, collective, and always partial reinscriptions of a cultural-technical situation in which we all find ourselves” (Philip, Irani, and Dourish 2012, 5).

Author’s Note

Part of the research for this article was conducted during a fellowship granted by the Institute for Advanced Studies “Media Cultures of Computer Simulation,” Leuphana University Lüneburg.

Acknowledgments

I am extremely grateful to Rebecca Jablonsky, Nick Seaver, and Tero Karppi for their feedback on earlier versions of this article and would like to thank the anonymous reviewers as well as the participants of the panel “Attention,” held at the 4S Annual Meeting in New Orleans, for their very valuable comments and suggestions.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research that undergirds this paper has been partly funded through the SNSF Sinergia Grant “Governing through Design” (grant no. 189933).
ORCID iD

Johannes Bruder https://orcid.org/0000-0003-1749-8338

Notes

1. I am indebted to Michael Fisch who pointed me toward the significance of working over capacity. In An Anthropology of the Machine, Fisch (2019) discusses the significance of “finessing the interval” for a system working over capacity, such as the Tokyo commuter train network. Human brains and commuter trains make for unlikely bedfellows, but it is this metaphorical parallelization of human and machine that sits at the heart of this article.
2. In this regard, meditation apps fall into a class of technics that include self-trackers that monitor and automatically intervene in the behavior of individuals (Berg 2017; Schüll 2016), apps or gadgets that “cancel” unwanted frequencies through adding white or pink noise (Hagood 2019), or cognitive devices that override corporeal senses and biological functions, as Chia (2019) offers, “through an imagined master code to program the mind” (p. 6). Further, one might want to include pharmacological cognitive enhancement with drugs such as Adderall and Modafinil, which have been particularly prominent among students and academics (Coveney, Gabe, and Williams 2011; Vargo and Petróczi 2016; Vrecko 2015), and the microdosing of classic psychedelics, which has gained traction among creative workers as well as in the traditional (white) middle class (Johnstad 2018). An interdisciplinary group of researchers from the University of California and the University of Alabama found that users experienced positive effects on sociability and “praised microdosing for its ability to increase productivity and satisfaction at their work, which is often a primary concern among the middle class. They reported being more focused, creative, and energetic, especially on the days of and after microdosing” (Webb, Copes, and Hendricks 2019, 37).
Psychologists Vince Polito and Richard Stevenson (2019), however, observed that the positive effects reported by microdosers are often accompanied by and occlude a tendency toward absorption and neuroticism.
3. Following Callard and colleagues (2013), I will from now on refer to this class of phenomena as self-generated mental activity and only use one of the alternative terms in case I refer to a specific aspect of self-generated mental activity.
4. Experienced mindfulness practitioners appear to be able to “change the relationship with the resting state and experience the stream of stimulus-independent mental content in an adaptive way” (Vago and Zeidan 2016, 97). Their capacity to tell when their mind begins to wander provides experimenters with a cue to start monitoring the interactions of large-scale brain systems when mind wandering is deliberate and goal-directed.
5. Unconstrained thought has therefore also been compared to film editing, with the hippocampus—a central element of the default mode network—as editor: it directs the cuts that essentially determine how individuals remember specific events and conceive of experience as episodes in time (Ben-Yakov and Henson 2018). The minds of people with hippocampal damage provide a model of where such processes fail. They appear to wander primarily in the present, which means that mind wandering does not occur more or less often than in healthy individuals, but it apparently lacks “flexible, episodic, and scene-based” qualities. Instead, mind wandering appears “abstract, semanticized, verbal”—and thus entirely similar to ways of thinking that many of us have been trained to enforce on the job (McCormick et al. 2018).

References

Alderson-Day, Ben, and Felicity Callard. 2016. “Altered States: Resting State and Default Mode as Psychopathology.” In The Restless Compendium, edited by Felicity Callard, Kimberley Staines, and James Wilkes, 11-17. Cham, Switzerland: Springer International.
doi: 10.1007/978-3-319-45264-7_2.
Amoore, Louise. 2020. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham, NC: Duke University Press.
Baird, Benjamin, Jonathan Smallwood, Antoine Lutz, and Jonathan W. Schooler. 2014. “The Decoupled Mind: Mind-wandering Disrupts Cortical Phase-locking to Perceptual Events.” Journal of Cognitive Neuroscience 26 (11): 2596-607. doi: 10.1162/jocn_a_00656.
Baym, Nancy K., Kelly B. Wagman, and Christopher J. Persaud. 2020. “Mindfully Scrolling: Rethinking Facebook After Time Deactivated.” Social Media + Society 6 (2): 1-10. doi: 10.1177/2056305120919105.
Beattie, Alex, and Elija Cassidy. 2020. “Locative Disconnection: The Use of Location-based Technologies to Make Disconnection Easier, Enforceable and Exclusive.” Convergence 27 (2): 395-413. doi: 10.1177/1354856520956854.
Beaty, Roger E., Qunlin Chen, Alexander P. Christensen, Jiang Qiu, Paul J. Silvia, and Daniel L. Schacter. 2018. “Brain Networks of the Imaginative Mind: Dynamic Functional Connectivity of Default and Cognitive Control Networks Relates to Openness to Experience.” Human Brain Mapping 39 (2): 811-21. doi: 10.1002/hbm.23884.
Ben-Yakov, Aya, and Richard N. Henson. 2018. “The Hippocampal Film Editor: Sensitivity and Specificity to Event Boundaries in Continuous Experience.” The Journal of Neuroscience 38 (47): 10057-68. doi: 10.1523/JNEUROSCI.0524-18.2018.
Berg, Martin. 2017. “Making Sense with Sensors: Self-tracking and the Temporalities of Wellbeing.” Digital Health 3 (January): 1-11. doi: 10.1177/2055207617699767.
Bruder, Johannes. 2019. Cognitive Code: Post-Anthropocentric Intelligence and the Infrastructural Brain. Kingston: McGill-Queen’s University Press.
Callard, Felicity, and Daniel S. Margulies. 2010. “The Industrious Subject: Cognitive Neuroscience’s Revaluation of ‘Rest.’” In Cognitive Architecture. From Biopolitics to Noo-politics.
Architecture & Mind in the Age of Communication & Information, edited by Deborah Hauptmann and Warren Neidich, 324-45. Rotterdam, the Netherlands: 010 Publishers.
Callard, Felicity, and Daniel S. Margulies. 2011. “The Subject at Rest: Novel Conceptualizations of Self and Brain from Cognitive Neuroscience’s Study of the ‘Resting State.’” Subjectivity 4 (3): 227-57. doi: 10.1057/sub.2011.11.
Callard, Felicity, Jonathan Smallwood, Johannes Golchert, and Daniel S. Margulies. 2013. “The Era of the Wandering Mind? Twenty-first Century Research on Self-generated Mental Activity.” Frontiers in Psychology 4 (2013): 891. doi: 10.3389/fpsyg.2013.00891.
Chia, Aleena. 2019. “Virtual Lucidity: A Media Archaeology of Dream Hacking Wearables.” Communication +1 7 (2): Article 6. doi: 10.7275/rvqj-n043.
Christoff, Kalina, Zachary C. Irving, Kieran C. R. Fox, R. Nathan Spreng, and Jessica R. Andrews-Hanna. 2016. “Mind-wandering as Spontaneous Thought: A Dynamic Framework.” Nature Reviews Neuroscience 17 (11): 718-31. doi: 10.1038/nrn.2016.113.
Cook, Joanna. 2016. “Mindful in Westminster: The Politics of Meditation and the Limits of Neoliberal Critique.” HAU: Journal of Ethnographic Theory 6 (1): 141-61. doi: 10.14318/hau6.1.011.
Coveney, Catherine, Jonathan Gabe, and Simon Williams. 2011. “The Sociology of Cognitive Enhancement: Medicalisation and Beyond.” Health Sociology Review 20 (4): 381-93. doi: 10.5172/hesr.2011.20.4.381.
Fazekas, Peter, Bence Nanay, and Joel Pearson. 2021. “Offline Perception: An Introduction.” Philosophical Transactions of the Royal Society B: Biological Sciences 376 (1817): 20190686. doi: 10.1098/rstb.2019.0686.
Ferguson, Michaele L. 2016. “Symposium: Mindfulness and Politics: Introduction.” New Political Science 38 (2): 201-5. doi: 10.1080/07393148.2016.1153190.
Fisch, Michael. 2019. An Anthropology of the Machine: Tokyo’s Commuter Train Network. Chicago, IL: University of Chicago Press.
Fox, Kieran C. R., and Mathieu Koroma. 2018.
“Wandering along the Spectrum of Spontaneous Thinking: Dreaming, Meditation, Mind-wandering, and Well-being. An Interview with Kieran Fox.” ALIUS Bulletin 2 (2018): 1-15. Accessed March 23, 2019. https://www.aliusresearch.org/uploads/9/1/6/0/91600416/fox_-_wandering_along_the_spectrum_of_spontaneous_thinking.pdf.
González, Oscar C., Yury Sokolov, Giri P. Krishnan, Jean Erik Delanois, and Maxim Bazhenov. 2020. “Can Sleep Protect Memories from Catastrophic Forgetting?” eLife 9 (August): e51005. doi: 10.7554/eLife.51005.
Gregg, Melissa. 2018. Counterproductive: Time Management in the Knowledge Economy. Durham, NC: Duke University Press.
Guyard, Carina, and Anne Kaun. 2018. “Workfulness: Governing the Disobedient Brain.” Journal of Cultural Economy 11 (6): 535-48. doi: 10.1080/17530350.2018.1481877.
Hagood, Mack. 2019. Hush: Media and Sonic Self-control. Durham, NC: Duke University Press.
Halpern, Orit. 2014. Beautiful Data: A History of Vision and Reason since 1945. Durham, NC: Duke University Press.
Hassabis, Demis, Christopher Summerfield, and Matthew Botvinick. 2017. “AI and Neuroscience: A Virtuous Circle.” DeepMind Blog, August 2. Accessed February 17, 2021. https://deepmind.com/blog/article/ai-and-neuroscience-virtuous-circle.
Hove, Michael J., Johannes Stelzer, Till Nierhaus, Sabrina D. Thiel, Christopher Gundlach, Daniel S. Margulies, Koene R. A. Van Dijk, Robert Turner, Peter E. Keller, and Björn Merker. 2016. “Brain Network Reconfiguration and Perceptual Decoupling During an Absorptive State of Consciousness.” Cerebral Cortex 26 (7): 3116-24. doi: 10.1093/cercor/bhv137.
Hull, Gordon, and Frank Pasquale. 2018. “Toward a Critical Theory of Corporate Wellness.” BioSocieties 13 (1): 190-212. doi: 10.1057/s41292-017-0064-1.
The Institute for Mindful Leadership. 2016. “Research.” Accessed January 24, 2021. https://instituteformindfulleadership.org/research/.
Jablonsky, Rebecca. Forthcoming.
“Attention by Design.” Science, Technology, & Human Values.
Johnstad, Petter Grahl. 2018. “Powerful Substances in Tiny Amounts: An Interview Study of Psychedelic Microdosing.” Nordic Studies on Alcohol and Drugs 35 (1): 39-51. doi: 10.1177/1455072517753339.
Karppi, Tero. 2018. Disconnect: Facebook’s Affective Bonds. Minneapolis, MN: University of Minnesota Press.
Karppi, Tero. 2020. “Socioeconomic Status Update: On Whitehead, Facebook, and Targeting a High-end Smartphone Advertisement.” Parallax 26 (1): 48-64. doi: 10.1080/13534645.2019.1685779.
Kenyon, Garrett. 2020. “Lack of Sleep Could Be a Problem for AIs.” Scientific American, December 5. Accessed May 20, 2021. https://www.scientificamerican.com/article/lack-of-sleep-could-be-a-problem-for-ais/.
Kirkpatrick, James, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, et al. 2017. “Overcoming Catastrophic Forgetting in Neural Networks.” Proceedings of the National Academy of Sciences 114 (13): 3521-26. doi: 10.1073/pnas.1611835114.
Krajewski, Markus. 2018. The Server: A Media History from the Present to the Baroque. Translated by Ilinca Iurascu. New Haven, CT: Yale University Press.
Langille, Jesse J. 2019. “Remembering to Forget: A Dual Role for Sleep Oscillations in Memory Consolidation and Forgetting.” Frontiers in Cellular Neuroscience 13 (March): 71. doi: 10.3389/fncel.2019.00071.
Levine, Nick. 2017. “The Nature of the Glut: Information Overload in Postwar America.” History of the Human Sciences 30 (1): 32-49. doi: 10.1177/0952695116686016.
Lewis, Penelope A., Günther Knoblich, and Gina Poe. 2018. “How Memory Replay in Sleep Boosts Creative Problem-solving.” Trends in Cognitive Sciences 22 (6): 491-503. doi: 10.1016/j.tics.2018.03.009.
Lindsay, Grace W. 2020. “Attention in Psychology, Neuroscience, and Machine Learning.” Frontiers in Computational Neuroscience 14 (April): 29. doi: 10.3389/fncom.2020.00029.
McCormick, Cornelia, Clive R.
Rosenthal, Thomas D. Miller, and Eleanor A. Maguire. 2018. “Mind-wandering in People with Hippocampal Damage.” The Journal of Neuroscience 38 (11): 2745-54. doi: 10.1523/JNEUROSCI.1812-17.2018.
McVay, Jennifer C., and Michael J. Kane. 2010. “Does Mind Wandering Reflect Executive Function or Executive Failure? Comment on Smallwood and Schooler (2006) and Watkins (2008).” Psychological Bulletin 136 (2): 188-207. doi: 10.1037/a0018298.
Mermillod, Martial, Aurélia Bugaiska, and Patrick Bonin. 2013. “The Stability-plasticity Dilemma: Investigating the Continuum from Catastrophic Forgetting to Age-limited Learning Effects.” Frontiers in Psychology 4 (2013): 504. doi: 10.3389/fpsyg.2013.00504.
Morcom, Alexa M., and Paul C. Fletcher. 2007. “Does the Brain Have a Baseline? Why We Should Be Resisting a Rest.” NeuroImage 37 (4): 1073-82. doi: 10.1016/j.neuroimage.2006.09.013.
Morrison, Hazel, Shannon McBriar, Hilary Powell, Jesse Proudfoot, Steven Stanley, Des Fitzgerald, and Felicity Callard. 2019. “What Is a Psychological Task? The Operational Pliability of ‘Task’ in Psychological Laboratory Experimentation.” Engaging Science, Technology, and Society 5 (March): 61. doi: 10.17351/ests2019.274.
Mrazek, Alissa J., Michael D. Mrazek, Casey M. Cherolini, Jonathan N. Cloughesy, David J. Cynman, Lefeba J. Gougis, Alex P. Landry, Jordan V. Reese, and Jonathan W. Schooler. 2019. “The Future of Mindfulness Training Is Digital, and the Future Is Now.” Current Opinion in Psychology 28 (August): 81-86. doi: 10.1016/j.copsyc.2018.11.012.
Murphy, Charlotte, Elizabeth Jefferies, Shirley-Ann Rueschemeyer, Mladen Sormaz, Hao-ting Wang, Daniel S. Margulies, and Jonathan Smallwood. 2018. “Distant from Input: Evidence of Regions within the Default Mode Network Supporting Perceptually-decoupled and Conceptually-guided Cognition.” NeuroImage 171 (May): 393-401. doi: 10.1016/j.neuroimage.2018.01.017.
Parcerisa, Christin. 2019.
“Can Mindfulness Actually Help You Work Smarter?” Google: The Keyword, October 24. Accessed April 21, 2020. https://www.blog.google/inside-google/working-google/mindfulness-at-work/.
Parviainen, Jaana, and Ilmari Kortelainen. 2019. “Becoming Fully Present in Your Body: Analysing Mindfulness as an Affective Investment in Tech Culture.” Somatechnics 9 (2–3): 353-75. doi: 10.3366/soma.2019.0288.
Philip, Kavita, Lilly Irani, and Paul Dourish. 2012. “Postcolonial Computing: A Tactical Survey.” Science, Technology, & Human Values 37 (1): 3-29. doi: 10.1177/0162243910389594.
Polito, Vince, and Richard J. Stevenson. 2019. “A Systematic Study of Microdosing Psychedelics.” PLoS One 14 (2): e0211023. doi: 10.1371/journal.pone.0211023.
Raichle, Marcus E. 2010. “The Brain’s Dark Energy.” Scientific American 302 (3): 44-49. doi: 10.1038/scientificamerican0310-44.
Raichle, Marcus E., Anne M. MacLeod, Abraham Z. Snyder, William J. Powers, Debra A. Gusnard, and Gordon L. Shulman. 2001. “A Default Mode of Brain Function.” Proceedings of the National Academy of Sciences 98 (2): 676-82. doi: 10.1073/pnas.98.2.676.
Ruckenstein, Minna, and Natasha Dow Schüll. 2017. “The Datafication of Health.” Annual Review of Anthropology 46 (1): 261-78. doi: 10.1146/annurev-anthro-102116-041244.
Schüll, Natasha Dow. 2016. “Data for Life: Wearable Technology and the Design of Self-care.” BioSocieties 11 (3): 317-33. doi: 10.1057/biosoc.2015.47.
Search Inside Yourself Leadership Institute. 2019. “Search Inside Yourself Program Impact Report 2019.” Accessed January 12, 2020. https://siyli.org/downloads/Program-Impact-Report.pdf.
Serra, Joan, Didac Suris, Marius Miron, and Alexandros Karatzoglou. 2018. “Overcoming Catastrophic Forgetting with Hard Attention to the Task.” In Proceedings of the 35th International Conference on Machine Learning, 4548-57. PMLR. Accessed January 11, 2020. http://proceedings.mlr.press/v80/serra18a.html.
Shepherd, Joshua.
2019. “Why Does the Mind Wander?” Neuroscience of Consciousness 2019 (1): niz014. doi: 10.1093/nc/niz014.
Simon, Herbert A. 1971. “Designing Organizations for an Information-rich World.” In Computers, Communication, and the Public Interest, edited by Martin Greenberger, 37-72. Baltimore, MD: Johns Hopkins University Press.
Smallwood, Jonathan, and Jonathan W. Schooler. 2015. “The Science of Mind Wandering: Empirically Navigating the Stream of Consciousness.” Annual Review of Psychology 66 (1): 487-518. doi: 10.1146/annurev-psych-010814-015331.
Stulberg, Brad, and Steve Magness. 2017a. “How Googlers Avoid Burnout (and Secretly Boost Creativity).” WIRED, November 6. Accessed January 14, 2020. https://www.wired.com/story/googlers-avoid-burnout-secretly-boost-creativity/.
Stulberg, Brad, and Steve Magness. 2017b. Peak Performance: Elevate Your Game, Avoid Burnout, and Thrive with the New Science of Success. New York: Rodale.
Sutton, Theodora. 2020. “Digital Harm and Addiction: An Anthropological View.” Anthropology Today 36 (1): 17-22. doi: 10.1111/1467-8322.12553.
Tang, Yi-Yuan, Britta K. Hölzel, and Michael I. Posner. 2015. “The Neuroscience of Mindfulness Meditation.” Nature Reviews Neuroscience 16 (4): 213-25. doi: 10.1038/nrn3916.
Vago, David R., and Fadel Zeidan. 2016. “The Brain on Silent: Mind Wandering, Mindful Awareness, and States of Mental Tranquility.” Annals of the New York Academy of Sciences 1373 (1): 96-113. doi: 10.1111/nyas.13171.
Van Dam, Nicholas T., Marieke K. van Vugt, David R. Vago, Laura Schmalzl, Clifford D. Saron, Andrew Olendzki, Ted Meissner, Sara W. Lazar, Catherine E. Kerr, et al. 2018a. “Mind the Hype: A Critical Evaluation and Prescriptive Agenda for Research on Mindfulness and Meditation.” Perspectives on Psychological Science 13 (1): 36-61. doi: 10.1177/1745691617709589.
Van Dam, Nicholas T., Marieke K. van Vugt, David R. Vago, Laura Schmalzl, Clifford D.
Saron, Andrew Olendzki, Ted Meissner, Sara W. Lazar, Jolie Gorchov, et al. 2018b. “Reiterated Concerns and Further Challenges for Mindfulness and Meditation Research: A Reply to Davidson and Dahl.” Perspectives on Psychological Science 13 (1): 66-69. doi: 10.1177/1745691617727529.
van Vugt, Marieke. 2018. “Mind Wandering Is Crucial for Cognitive Computing and Can Help with Long-term Adaptation.” Paper submitted to Cognitive Computing Conference, December 18-20, Hanover, Germany.
Vargo, Elisabeth J., and Andrea Petróczi. 2016. “‘It Was Me on a Good Day’: Exploring the Smart Drug Use Phenomenon in England.” Frontiers in Psychology 7 (May): Article 779. doi: 10.3389/fpsyg.2016.00779.
Vrecko, Scott. 2015. “Everyday Drug Diversions: A Qualitative Study of the Illicit Exchange and Non-medical Use of Prescription Stimulants on a University Campus.” Social Science & Medicine 131 (April): 297-304. doi: 10.1016/j.socscimed.2014.10.016.
Webb, Megan, Heith Copes, and Peter S. Hendricks. 2019. “Narrative Identity, Rationality, and Microdosing Classic Psychedelics.” International Journal of Drug Policy 70 (August): 33-39. doi: 10.1016/j.drugpo.2019.04.013.
Wotruba, Diana, Lars Michels, Roman Buechler, Sibylle Metzler, Anastasia Theodoridou, Miriam Gerstenberg, Susanne Walitza, Spyros Kollias, Wulf Rössler, and Karsten Heekeren. 2014. “Aberrant Coupling Within and Across the Default Mode, Task-positive, and Salience Network in Subjects at Risk for Psychosis.” Schizophrenia Bulletin 40 (5): 1095-104. doi: 10.1093/schbul/sbt161.

Author Biography

Johannes Bruder studies the history and present of decision-making systems and how these encode psychological categories, sociological models, artistic practices, and speculative designs. His first book Cognitive Code: Post-anthropocentric Intelligence and the Infrastructural Brain (MQUP, 2019) is based on fieldwork in neuroscience laboratories and provides deep insights into the bio-politics of contemporary machine learning.
He is the interim head of the Institute of Experimental Design and Media Cultures (IXDM), FHNW Academy of Art and Design, and affiliated with Milieux – Institute for Arts, Culture, Technology at Concordia University, Montreal, Quebec, Canada.