Posts tagged with ‘Internet’

Out of Sight

The Internet delivered on its promise of community for blind people, but accessibility is easy to overlook.

I have been blind since birth. I’m old enough to have completed my early schooling at a time when going to a special school for blind kids was the norm. In New Zealand, where I live, there is only one school for the blind. It was common for children to leave their families when they were five, to spend the majority of the year far from home in a school hostel. Many family relationships were strained as a result. Being exposed to older kids and adults with the same disability as you, however, can supply you with exemplars. It allows the blind to see other blind people being successful in a wide range of careers, raising families and being accepted in their local community. A focal point, such as a school for the blind, helps foster that kind of mentoring.

The Internet has expanded the practical meaning of the word community. New technology platforms are rarely designed to be accessible to people unlike the designers themselves, but that doesn’t stop everyone who can use them from doing so. For blind people, the Internet has allowed an international community to flourish where there wasn’t much of one before, allowing people with shared experiences, interests, and challenges to forge a communion. Just as important, it has allowed blind people to participate in society in ways that have often otherwise been foreclosed by prejudice. Twitter has been at the heart of this, helping bring blind people from many countries and all walks of life together. It represents one of the most empowering aspects of the Internet for people with disabilities — its fundamentally textual nature and its robust API, which supports an ecosystem of innovative accessible apps, have made it an equalizer. Behind the keyboard, no one need know you’re blind or have any other disability, unless you choose to let them know.

Cookie Cutters

NEC SX-3, 1984

Online advertising is a self-regulated industry, which means you tangle with it entirely on its terms

The signature, once the widely accepted seal of approval, seems increasingly quaint and perfunctory in the age of digitally mediated interaction, an outmoded ritual that for many transactions has become superfluous. When you scrawl-sign your name with a plastic stub in a digital capture box, you can’t help but wonder whether this vestigial ritual of consent is even necessary.

Online marketers and data brokers don’t seem to think so, however. Many benefit from this relic and its inability to handle today’s complexities. This is best seen when we try to enter into agreements to opt out of online advertising schemes. Instead of agreeing to an opted-out status, we find we have always opted in—even when we accept their offer not to track our online activities. We are, it seems, already opted out of opting out of such efforts.

Traditionally, the signing of documents has served as the lasting reminder of the moment of contractual consent. But this moment, in more and more cases, never comes. For instance, faced with towering stacks of delinquent contracts, mortgage companies resorted to the practice of robo-signing to expedite foreclosure proceedings against homeowners, despite laws that explicitly forbid such practices. President Obama’s legal team had to argue for the constitutionality of his using an autopen to sign legislation into law when he is away from Washington.

Hate Sinks

The facade of liberal democracy only stays clean by putting young women in hate’s way

Moderators bring silences to the Internet’s city of words. They read everything so that we don’t have to. On sites under moderation, they have filtered everything we see. It is Courtney (one of the many moderators, all given pseudonyms here, whom I interviewed as part of my research on ideas of civility and changes in the public sphere), not us, who combs the threads of an Australian broadcaster’s website and social-media pages for the output of users “who will just post the word cunt 50 times for like three hours.” It is Michelle who, daily, reads and deletes the many comments posted to news stories on immigration “calling for asylum seekers to be burned in their boats.” Moderators try to keep their employers in the clear by banishing antagonism that exceeds the anachronistic limits of a deliberative public sphere.

But this means we necessarily don’t see their work. Readers can’t look at what goes on in the content-management system or the stream of user comments welling up in it. We can never know what has been cut or why. We may assume, but we can never be sure, what was hacked out to leave the telltale scar: “comment deleted by moderator.” On the sleek surfaces of big media sites, there are no signs of the frenetic backstage efforts to staunch the hemorrhaging of their gravitas, or of the pace that Sarah sets when “you’ve got three hours and there’s 1500 comments to get through, and they have to be read and thought about, and you’ve got to check links.”

SEO & the Disappearing Self

"On Facebook, our communication is assessed like online advertising — how many click-throughs did it inspire? This prompts us to make what we say on Facebook sound like advertising discourse. And if the way we talk about ourselves is to a large degree who we think we are, this would not bode well for human dignity."

By Rob Horning

For a long time, well past the point of reasonability, I was one of those people who didn’t want a cell phone. I romanticized the idea of disappearing completely as technology left me behind and I could assume a “pure” form of unmediated existence. I professed a fear of being too easily reachable. But more likely I was afraid that carrying a phone around would provide continual proof that in fact no one wanted to reach me, that I had already disappeared and didn’t even know it.

But what I harped on most was my fear of having to ignore certain calls and having the person who called assume that I was choosing to ignore them. Maybe they would trust that I have a good reason; maybe they would think, like I often do, that I am a self-centered jerk. But the plausible and conveniently neutral explanation that I am not home would no longer apply, would no longer be at the ready to mitigate the missed connection.

Inevitably, I thought, this would degrade my moral obligation to reciprocate with friends in order to sustain for myself the idea that they actually are friends. The burden of reciprocation becomes too great: Because we are expected to carry around phones, it’s easy to presume that by default, the conversational channel is always open and that when your call is not taken, it’s as if the person you were trying to talk to had turned their back on you mid-sentence. A new burden emerges: an obligation to explicitly redraw the boundaries of availability. Being “not at home” becomes a state of mind, a choice that brings forward any feelings of self-importance that might otherwise lie dormant.

It would seem natural enough to want to associate availability with presence, but thanks to technology, we can be virtually present yet dispositionally unavailable. This conundrum evokes those scenes in 19th-century novels when the card of a hopeful drawing-room visitor is sent up, and the servant returns to say that no one is in, even though everyone knows that they are. Etiquette deems this a frank, acceptable lie, a recoding that blunts the truth that one person has refused to speak to another.

I have always been uncomfortable with that, probably because I too readily imagine myself being refused. It seemed to me that cell-phone technology was muddling social signals, putting both too much and too little ego at stake in the effort to communicate. To restore clarity, I thought it might be morally useful for everyone to unambiguously experience the full weight of their refusing, and then maybe they would do less of it. (I believed somewhat naively that no one thrived on the capricious power to reject.) Naturally I overlooked how much refusing I do. When I didn’t have a cell phone, I operated under the illusion that my various refusals were disguised by the “not at home” fiction; now I am in the position of having to feel like one of those drawing-room snobs every time I ignore a cell-phone call. But I don’t want to be the snob, I protest to myself — the snobs are supposed to be picking on me. I’m the underdog! I’m the underdog!

If anything, social media have made our responsibilities toward those friends trying to reach us even murkier. Some of the success of those platforms must be attributable to how they ease the pain of refusing people. When you are socializing in a broadcast medium, you don’t have to refuse anyone. To accept a friend request on Facebook, for instance, sends out a self-satisfying burst of good will and burdens us very little; at worst, we may have to defriend the person later, an invisible action that the defriended may never notice. The awkwardness of building cliques is displaced to the medium and becomes part of the platform’s functionality, how it allows you to regroup friends and filter the results of their gross social product online.

And increasingly, Facebook is performing the social filtering for us, absolving us of the guilt implicit in that as well. Social media structures communication between friends so that the responsibility for listening — inescapably built into earlier mediums that structured talk between friends as person-to-person — is modulated into a vaguer injunction to respond if and when you feel like it. Because status updates and the like are not addressed to anyone specific, they don’t generate an obligation in anyone specific to pay attention. The messages instead compete in an attention marketplace that Facebook’s interface creates and moderates, in which the currency is comments and “likes” and the other click-driven responses that the company can measure and process algorithmically. The results of that process — which is explored in this Daily Beast article by Thomas E. Weber — determine which messages will be featured prominently in friends’ newsfeeds, and which ones will be suppressed.

The algorithms that Facebook uses to generate the newsfeed it presents to users do a lot of pre-emptive refusal for us — we don’t see a list of all our friends’ updates in reverse chronological order. The platform filters out materials it has determined will be less compelling to us (or to someone in our friend pool, or to someone statistically similar to us, at least). Weber points out that posts that others have responded to are more likely to show up in news feeds. He advises that you “try to get a few friends to click like crazy on your items” if you want to show up in your friends’ newsfeeds more regularly.
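To make that filtering concrete, here is a minimal sketch of an engagement-weighted newsfeed, assuming invented names and weights throughout; Facebook’s actual ranking system is proprietary and far more elaborate, so treat this as an illustration of the logic Weber describes, not the algorithm itself.

```python
# A toy engagement-weighted feed filter. Every field name, weight, and the
# decay formula is an assumption made for illustration; this is not
# Facebook's actual newsfeed algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    comments: int
    age_hours: float

def feed_score(post: Post) -> float:
    """Score a post by prior engagement, decayed by age (assumed formula)."""
    engagement = post.likes + 2 * post.comments  # comments weighted higher, by assumption
    return engagement / (1.0 + post.age_hours)

def build_newsfeed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Keep only the top-scoring posts; everything else is silently suppressed."""
    return sorted(posts, key=feed_score, reverse=True)[:limit]

# A post that already drew responses crowds out a quieter one, regardless of merit.
posts = [
    Post("alice", "a link friends clicked on", likes=14, comments=6, age_hours=3.0),
    Post("bob", "a stray thought", likes=0, comments=0, age_hours=1.0),
]
for post in build_newsfeed(posts):
    print(post.author, round(feed_score(post), 2))
```

The structural point survives any choice of weights: posts that already have responses score higher and so become more visible, which is why Weber’s advice to have friends “click like crazy” works at all.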

But Facebook’s sense of our interests is only one of its filtering criteria. Facebook, Weber explains, prioritizes updates that prompt “user engagement” (links requiring clicks) over ones that are just thoughts or ideas, not links or photos. Pushing updates that require clicks suits its own purposes of keeping us logged in and generating data trails it can use. On Facebook, our communication is assessed like online advertising — how many click-throughs did it inspire? This prompts us to make what we say on Facebook sound like advertising discourse. And if the way we talk about ourselves is to a large degree who we think we are, this would not bode well for human dignity.

Since these algorithms essentially adjudicate sociality, it pays to shape what we say to suit them. This is part of how the form of social media shapes the content: our identity. “For average users, cracking the Facebook code is something of a fun puzzle,” Weber writes, “but for marketers trying to tap Facebook — or individuals who see the service as a way to promote themselves — understanding how content propagates through the system is anything but a game.” But Weber has set up a false distinction: Facebook systematically obliterates the distinction between “average users” and “marketers” and makes all friendship into a game of self-promotion, a struggle to get noticed. Its algorithms require a search-engine optimization of the self, a sanitized version that reveals to friends not who you are, what you’ve been thinking about and what you want to share, exactly, but instead shows how well you’ve been able to package yourself as an attention-getting brand. Self-expression is insufficient for sustaining friendship in the realm of Facebook; you need to be selling something, and you need the numbers on your side to make an impact. The self disappears into an endless sales pitch.

Facebook makes a market in friendly discourse and skews it so that it, as broker, always comes out ahead. But in other ways, the friend market functions like most others: it depersonalizes exchange and reduces transaction costs, thereby increasing the number of exchanges that occur. Accordingly, the volume of friend communication we consume thanks to Facebook has increased exponentially. But we have next to no ethical obligation with regard to any of it — that’s understood by all parties entering into Facebook’s market. We are obliged only to be rational maximizers, like we are in ordinary markets.

But what has radically changed is the nature of friendship, which once upon a time was something intended specifically as a bulwark against depersonalization, against market logic. With Facebook, the consumerist allure of “more, faster” fuses with a closely related moral cowardice about rejecting people to drive us en masse to the platform, bringing the efficiencies of commercialization right into the heart of our social lives. With friendship in play as an alienated revenue stream, we must retreat even further into our private lives to find a haven from commercialization, to preserve the disappearing self. Soon we’ll have to seek refuge in that evocation of the “blissful isolation of intra-uterine life,” as Freud called it — the “total narcissism” of sleep, where our gadgets can’t reach us.

Resisting the Chilling Effect

Images by Barbara Kruger

Abrahamian explains why a salon revival may be the best way to fight epistemic closure in the digital age — where the internet is a medium that never forgives, and never forgets.

By Atossa Abrahamian

Before the digital age, we took for granted that a social faux-pas, a bad night out, a simple mistake would be eventually forgotten; in fact, our youth was wagered on the premise that awful haircuts and embarrassing behavior in our formative years wouldn’t be held against us later on. Forgetting allowed us to fully experience the lexicon of adolescence: we grew out of the tantrums and into our noses; we grew tired of the circus and fond of the theater; in short, we were given the freedom to grow up and not look back. 

Today, we air our dirty laundry in real-time; we haven’t even a moment to reflect before it becomes a part of our virtual D.N.A. In The New York Times Magazine’s July 19 cover story, “The End of Forgetting,” Jeffrey Rosen laments the Internet’s capacity to systematically archive personal identity, arguing that the threat of lifelong searchability makes it increasingly difficult to reinvent ourselves and that our society is experiencing a “collective identity crisis” as a result. He invokes a familiar cast of characters in the narrative of online privacy: the schoolteacher fired for her Facebooked drunkenness; the teenager laid off after admitting she was “so totally bored!!!”; the Canadian senior citizen who once dabbled in hallucinogenic drugs and now cannot cross the U.S. border. These stories were once quite shocking; today we hardly bat an eye and blame the subjects for their lack of savvy.

We can add to that list the participants in Ezra Klein’s Journolist. In early 2007, Washington Post journalist Ezra Klein started the e-mail forum to discuss politics with colleagues and acquaintances off the record. According to Klein, the list was meant to use the Internet’s connective capacities to create “an insulated space where the lure of a smart, ongoing conversation would encourage journalists, policy experts and assorted other observers to share their insights with one another.” Membership gradually grew to include 400 prominent writers, editors, academics and reporters, for whom the list was something of a safe haven – a place to talk without the burden of adding to the public record or putting their career at stake. 

But in a trackable, Googleable, searchable world, it has become very easy to speak one’s mind, but increasingly difficult to change it. Journolist was originally meant to remedy this. Its informal, spontaneous, and spirited e-mail discussions ranged from health-care reform to foreign policy to local news. Though most members admittedly leaned toward the left, they still frequently disagreed. This isn’t to say that the list was a raging hotbed of disputes and denunciations — Politico’s Walter Shapiro compared it to “cafeteria chatter.” Journolist, after all, was an e-mail listserv, tedious to read and all-too-easy to ignore. Even so, Journolist allowed members to test out half-baked ideas for soundness before transferring their thoughts to their columns, articles, blogs, or lectures. Privacy, wrote Klein, was “essential” and could “only be guaranteed by keeping these conversations private.”

The list was active until early this summer, when an anonymous member leaked to FishbowlDC and The Daily Caller, a conservative online journal, strings of emails that Washington Post reporter Dave Weigel had written to the group. Weigel’s private messages, as well as retorts from other members, were published without context, purporting to expose Weigel – who had just been given a Post column – as biased and unfair. Enraged conservative commentators labeled Journolist a “liberal conspiracy.” Weigel resigned (he now writes for Slate.com), and Klein shut down the forum the same day. Journolist’s downfall suggests that despite the frequent celebration of the Internet as an interactive, democratizing medium – a digitized public sphere, a meritocratic forum of ideas – it is a medium that never forgives, and never forgets.

Imagine that you are struck with a curious and rather outlandish thought that you don’t necessarily want to defend for the rest of your life but nonetheless feel compelled to share — if only to see how others react. You write an impassioned paragraph detailing your idea but pause before posting the entry online. Without an editor, you are the sole judge of the quality of your work, and you are aware that by hitting the “publish” button, you would be subjecting yourself to possibly harsh, often anonymous criticism. You must take into account the Internet’s systemic indelibility: that henceforth your entire intellectual stance could be defined by a single, possibly ill-conceived argument. 

So you vacillate between killing the conversation before it begins and risking becoming “misrepresented and burned,” as Rosen puts it. Inevitably, this ends in some form of self-censorship. Not every form of online self-censorship is necessarily harmful — if people stopped documenting such things as the intimate details of their breakfast, society would hardly be worse off — but aside from oversharing, what will happen to plain sharing? Withholding inane tweets or provocative photos is one thing, but what will become of intelligent exchange and thoughtful conversation?

In 1981 Jürgen Habermas outlined the ideal circumstances for speech in the “common competitive quest for truth.” Every competent actor must be able to take part in a discourse, to introduce and question any assertion, and to express his attitudes, desires, and needs, without any internal or external forces preventing him from doing so. But because the Internet never forgets, these conditions are impossible to recreate online; its restrictive forces exist not just in the moment but in the indeterminate future too. As a result, ideas become crystallized, locked in time, and indelible.

But ideas are not by nature static; to think and to discuss is to invariably change one’s mind. When mercurial acts of spontaneous thinking and feeling move online, they become not just a matter of starting a conversation and nursing a spark of thought between people, but a matter of brand-building, networking, even dating. Online, ideas are permanent; they have no room to change or grow. In The Structural Transformation of the Public Sphere Habermas argued that individuals need an element of inconsequence in order for ideas to “become emancipated from the bonds of economic dependence.” But at present, ideas are currency, and their exchange a calculated and Machiavellian act. 

The online forum is thus overwhelmed by the monetization of information. The lure of instant publication and the dizzying proliferation of gossip gone viral are hard to resist; most publications, hungry for content and desperate for traffic, have no qualms about printing invasive material. This leaves us in an Orwellian atmosphere of suspicion and betrayal. If confidentiality is contingent on not receiving a tempting enough offer, whether it take the form of money, notoriety, or the thrill of shaping Twitter’s trending topics, then who really can be trusted with our digital communications? Post-Journolist, the idea of “private e-mail” is beginning to sound like an oxymoron, and the mandate of constant self-censorship is spreading accordingly.

An example of online mundanity can be found in social media platforms like Tumblr. Tumblr is organized hierarchically – blogs are valued not for the quality of their ideas, but for the number of “followers” and “re-blogs” they receive. Until recently, users were even ranked by their “tumblarity” – an index based on how much content was generated and re-generated. This encourages the proliferation of unchallenging content, often in the form of puppy photos, 1990s nostalgia, and snark – in short, mere entertainment. Ideas that ought to be seen once and destroyed are now produced at an alarming rate, but no longer have the privilege of disappearing.
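As a rough sketch of how such a ranking reduces blogging to throughput, consider a hypothetical “tumblarity”-style index. Tumblr never published its formula, so every weight below is an assumption; what matters is that nothing in the calculation inspects what was actually said.

```python
# A hypothetical "tumblarity"-style index: sheer volume of content generated
# and re-generated. All weights are invented; Tumblr's real formula was never public.
def tumblarity(posts: int, reblogs_received: int, reblogs_made: int, followers: int) -> float:
    # Reward output and recirculation; quality never enters the calculation.
    return posts + 2.0 * reblogs_received + 0.5 * reblogs_made + 0.1 * followers

# A prolific reblogger outscores a careful writer almost by definition.
print(tumblarity(posts=5, reblogs_received=40, reblogs_made=600, followers=1200))  # 505.0
print(tumblarity(posts=30, reblogs_received=10, reblogs_made=5, followers=200))    # 72.5
```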

If we want to encourage the exchange of ideas, we should not worry, at age 24, whether what we say tomorrow will be of any particular consequence ten years later. The need for a private space for conversation without the weight of the web-wide world on our shoulders is more pressing than ever. We need a space to ask questions that Google already has the answers to; where it’s alright to make comments only to later take them back; where it’s encouraged to agree, to disagree, to argue, and to play devil’s advocate, all for the sake of the argument. Now more than ever, we need a room of our own to change our minds as often as we choose in good faith.

The concept of the salon – the original journolist, if you will – is a viable alternative to online idea-sharing. Habermas used French salons of the 18th century as examples of “organized discussion among private people” to counteract the insidious forces of the mass media (which, at the time, took the form of our beloved free press). Loosely guided, spontaneous discussion amongst friends and acquaintances will do more for our minds and imaginations than any number of re-blogs and re-tweets; it will offer a means to counteract the Internet’s “chilling effect” on all people’s expressive capacities, and the resultant foreclosure of personal and intellectual serendipity, new ideas, and creative connections. Getting off the Internet might seem a regressive, conservative, and dull suggestion; it is, by all means, a disappointing conclusion to a great experiment in human connectivity. But it is necessary to take conservative action in the name of progress. A small step back could make room for many great leaps forward.

___

The New Inquiry hosts an ongoing salon in New York City. To learn more and stay updated on news and events, sign up for our newsletter.

Rapture, Rainbows & Reproduction (Part Deux)

In light of the new Microsoft promo featuring beloved YouTube superstar “Double Rainbow Man,” it seems appropriate to give Rob Horning’s theory of the experiential feedback loop new life. 

Horning writes,

Our imagination can only be staggered by limitlessness if it attempts to take it seriously rather than cut it down to the imagination’s size. The internet, and the digital reproduction and reduplication of experience that fuels it, tempts us into this contemplation of the infinite, the exhilarating abyss that pulls us in.

Of course, Microsoft’s all-too-imaginable campaign strategy falls rather short of the “exhilarating abyss” so wonderfully captured by the original Double Rainbow meme.

Far be it from me to disparage the advertising industry as such. It attracts the talent of some of our most socially attuned and artistically gifted culture-makers. But attempting to brand such an enlivening and spontaneous moment — experienced both by the howling videographer and the millions of users who made his exhilaration viral in the cultural commons — is something of a buzz kill.

Read all of Rob Horning’s original post here

Rapture, Rainbows & Reproduction

"Recording and sharing our response to natural beauty promises a kind of feedback loop….It’s mastery and surrender simultaneously."

By Rob Horning

At first it’s all rainbows and plenitude. This internet meme (a YouTube sensation, for those of you unexposed to the current viral media blitz) of a man recording his own delirious response to a double rainbow captures a culture-wide sense of joy at being able to experience miraculous nature and capture it too, master it, make it seemingly our own even as we know we are going to share it as if it were ours. One can readily imagine the rainbow man thinking, even in the midst of his rapture, “I can’t wait to upload this!”

Recording and sharing our response to natural beauty promises a kind of feedback loop; it rechannels our reactions back to ourselves almost as they are occurring and starts them echoing. The response is amplified with the addition of its replication, becoming ever more intense, until we’re sobbing and shouting, “It’s so beautiful! Oh, my God!” into the wind. It’s mastery and surrender simultaneously.

Giving ourselves over to the internet can evoke that same unstable mixture of feeling. We can be overwhelmed by all the available information even as it seems we can make it dance at our command. There are the apparent miracles of discovery: uncovering details you can’t believe someone else thought to organize, or stumbling on some piece of cultural detritus from your youth so inane and obscure you thought it was virtually a private memory, and finding it lovingly restored, preserved, and shared. It can feel like conjuring magic, the ability to think of something hovering on the verge of being forgotten and immediately summon it in digital form. We can pick our way along lines of inquiry not merely for as long as we have the imagination to invent new connections but for as long as we tolerate all the connections the network throws back at us, trusting them to carry us along, freed from the desire for any final destination. Inquiry dissolves into raw curiosity, or into the mirage of pure invention. Natural beauty is almost superfluous. Instead, beauty on demand.

But eventually our interaction with the internet precipitates a crisis. It begins to dawn on us that we pilot our way through the infinite alone, like Carl Sagan in his dandelion-seed space capsule in Cosmos, passing endlessly through billions and billions of galaxies, trillions and trillions of solar systems, looking for new life, or life that is new to us. And it’s lonely out in space. We look through the telescope to the edge of the visible universe, only to see that there is so much more beyond it that we will never be able to glimpse. At some point the game of pretending to assimilate infinity ceases to amuse and becomes a bit terrifying. In Pensée 72, Pascal attempts to describe how humbling the infinite can be:

Let man then contemplate the whole of nature in her full and lofty majesty, let him turn his gaze away from the lowly objects around him; let him behold the dazzling light set like an eternal lamp to light the universe, let him see the earth as a mere speck compared to the vast orbit described by this star, and let him marvel at finding this vast orbit itself to be no more than the tiniest point compared to that described by the stars revolving in the firmament. But if our eyes stop there, let our imagination proceed further; it will grow weary of conceiving things before nature tires of producing them.

Our imagination can only be staggered by limitlessness if it attempts to take it seriously rather than cut it down to the imagination’s size. The internet, and the digital reproduction and reduplication of experience that fuels it, tempts us into this contemplation of the infinite, the exhilarating abyss that pulls us in.

Taste & Algorithms

Ted Striphas, author of The Late Age of Print blog and book, has an interesting piece up on “algorithmic culture.” He describes how this form of culture-production operates by discussing how Amazon decides to position books:

So, every time a person browsing the web clicks from one book to another on Amazon and ends up buying the second, or every time a Kindle user highlights a specific phrase or set of words in a book, they are adding material to Amazon’s database. All of this is later worked to determine what matters—which in turn will help determine the shape of future data. If you see that 85% of people bought the second item on the list, you might be more inclined to consider it before simply buying what you’re browsing. And so on.

This method of determining which books are meaningfully related seems to be fundamentally opposed to the old method of deciding what relates to what, or what people ought to buy, or what you might like, which Striphas calls “elite culture.” In elite culture, “a small group of well-trained, trusted authorities determined not only what was worth reading, but also what within a given reading selection were the most important aspects to focus on.” Algorithmic culture apparently—Striphas stresses it is only apparently: a well-trained, small group of people still writes the algorithm that does the data work—does away with elite culture, because it makes everyone’s data equal. What matters for number crunching are broad trends within data, not the credentials of the critic who’s ranting or raving about a given book.
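To see the difference in miniature, here is a sketch of crowd-aggregated relevance in the style Striphas describes, assuming hypothetical function names and data; real recommender systems are vastly more sophisticated, but the principle (relatedness as a purchase-share statistic, with no critic anywhere) is the same.

```python
# A toy model of algorithmic culture: two books are "related" exactly as often
# as the crowd bought one after browsing the other. Names and data are hypothetical.
from collections import Counter, defaultdict

co_purchases: defaultdict[str, Counter] = defaultdict(Counter)

def record_click_through(browsed: str, bought: str) -> None:
    """Log that a shopper browsing one book ended up buying another."""
    co_purchases[browsed][bought] += 1

def related_books(browsed: str, top_n: int = 3) -> list[tuple[str, float]]:
    """Rank 'related' books purely by the crowd's purchase shares."""
    counts = co_purchases[browsed]
    total = sum(counts.values()) or 1
    return [(title, count / total) for title, count in counts.most_common(top_n)]

record_click_through("Moby-Dick", "Billy Budd")
record_click_through("Moby-Dick", "Billy Budd")
record_click_through("Moby-Dick", "The Confidence-Man")
print(related_books("Moby-Dick"))
# [('Billy Budd', 0.666...), ('The Confidence-Man', 0.333...)]
```

The “85% of people bought the second item” figure in the quoted passage is exactly this kind of purchase share; no credential or judgment enters the computation anywhere.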

Which might be praised as a triumph of everyone against the snobs: it seems that tastemakers no longer make taste. “The crowd” does, simply by the way its inclinations overlap, combine, become statistically meaningful. And, seemingly, this new snapshot of trends in reading, buying, surfing, highlighting, and other digital activities would be one more faithful to everyone in the crowd’s habits. But things are at times not what they seem. Because when people’s reading habits are converted into data to be mined, they lose all distinguishing characteristics. Striphas notes,

When people read, on a Kindle or elsewhere, there’s context.  For example, I may highlight a passage because I find it to be provocative or insightful.  By the same token, I may find it to be objectionable, or boring, or grammatically troublesome, or confusing, or…you get the point.  When Amazon uploads your passages and begins aggregating them with those of other readers, this sense of context is lost.  What this means is that algorithmic culture, in its obsession with metrics and quantification, exists at least one level of abstraction beyond the acts of reading that first produced the data.

Context is golden, and it’s currently presented by algorithmic culture in a very tarnished way. Of course, in the future, as more and more people feed more and more sharply defined images of themselves into the devices that track their activities and map their habits, algorithms may be written that determine whose activities are more culturally meaningful, and that attempt to gauge their context: these future algorithms might identify that this underlining was done by a psychology undergrad who 20 minutes ago tweeted “Final tomorrow, sukkage. Hope being drunk will make reading interesting.” But maybe not. Then there is the problem of the people who act outside the digital loop: when I underline a passage in a pulp-and-fiber paperback, that significant little datum is not streamed to Amazon. The determination of cultural relevance will remain (at least) one step removed from the immediate activity of “the crowd,” but it will (as always) have a great impact on what people do.

The New Inquiry’s visual history at a glance.

The aesthetic pleasure of unintentional juxtaposition, formerly epitomized by channel surfing, takes on the aspect of gallery curation with this gorgeous mosaic viewer. Tumblr’s platform has always had the quality of visual flow, but this content organizational tool transforms the linear weblog into a kaleidoscope. Utterly transfixing.
View some of our favorites:

i12bent
billyjane
A Journey Round My Skull
Atom Punk
Graemebooks
pdvmorris
vindsval


Another Theory on the Decline of Media

As lovers of the printed word come to terms with the potential obsolescence of traditional journalism, they may at least take solace in the fact that the rest of establishment media is on its knees alongside them.

We agree that the Internet has caused a major media paradigm shift. But this fact should not become a scapegoat that obscures the other key ingredients of this troublesome brew. Television news (not to mention—albeit regrettably—print) has been on the decline for a while. And this is not just a statement about news credibility—it’s one of production and narrative quality.

I submit for your consideration:

As Jon Stewart and Stephen Colbert’s astonishing success also verifies, media is ill and humor is truth.

I would unfriend Ovid too

I was already pretty jazzed about The Point, but this excerpt from Jonny Thakkar’s “The Withering of Narcissus” seals it:

Ovid confesses he only wrote to become famous. That is why exile was a “living death” for the poet, since “writing a poem you can read to no one is like dancing in the dark. An audience stimulates brilliance, to praise a talent swells it: fame indeed is the spur.” But to win his audience’s favor, he had to write something that illuminated their lives.

Imagine Ovid with Facebook. He could have poked his friends and thrown sheep at them. He could have exchanged virtual gifts and written on Walls. He could have cycled through the same status updates, endlessly: “Publius Ovidius Naso is in exile / waiting to be recalled / thinking of his friends / scared of the natives / too cold for words / missing his mistress / innocent / apologetic / the best poet alive / not made for this kind of thing / enduring a living death without complaint.” Before long, Emperor Augustus would have had to unfriend him.

Curation vs. Creation on Tumblr

Curation Culture

Jon-Kyle Mohr posted a really thoughtful critique of the online curation culture called A Complimentary Rant on the State of Convenience. (Is curation culture a term? Can I coin that?) Anyway, Jon-Kyle’s central question:

Why is it that with the ease of publishing available today people so often choose to re-post content as opposed to create it?

Yesterday I saw the Gabriel Orozco retrospective at MoMA (along with the Bauhaus thang, more on that/Alma Mahler to come), and though I loved all the witty bike/car objects and enfant terrible yogurt lids and body-clay sculptures (is it just me or is he totally riffing on Louise Bourgeois with those babies?), I think my favorite was his Working Tables (2000-2005):

From MoMA: “The tables display prototypes for finished works, the beginnings of projects never realized, and found objects the artist kept for one reason or another—all things on their way to becoming sculpture.”

I think I found the tables fascinating because they presented these objects, some of which weren’t even Orozco’s works, with such care and intent. Crafted, selected, arranged, fetishized, the pieces tell me something about the way Orozco functions as an artist, but also something about the act of creating an artistic world to inhabit, of making a universe in miniature, of picking out the shiniest stones to keep in your hot little hand and then showing them to your friends in the yard.

I worry about the echo chamber of tumblrs and their ilk and the meaningless repetition and amplification of digital objects. I’m obsessed with the way that people collect, hoard, and re-broadcast photos and music and words without also creating their own. I’m not saying every tumblr reblogging pictures of hot girls in kitten earmuffs or grainy photos of Parisian cafes is as intentional and special as Orozco’s working tables, but the impulse, I think, is similar. We are overwhelmed, and if we can pick and choose a few objects that we like, put them in a place where we can keep them, it helps us to exercise some kind of control over the flood, even if it leads to visual/aural/literary ADD and a tawdry kind of exhibitionism: look at all these things I found. But while I’d rather not bother with some people’s online collections, I think some are interesting as works in progress, and some seem like ready-made archives, perfect and complete.

***

The New Inquiry:

Yes. This is a fascinating media trend to consider, especially in light of Michael Goldhaber’s lengthy study, “The Value of Openness in an Attention Economy,” which suggests that we should reverse our sense of the consumer/producer relationship in an Attention Economy.

Consumer attention—not information production—is the valuable (thus tradeable) commodity in a New Media age.

Due to the limits of time and consciousness, our attention is by definition scarce. The websites, blogs, television, mainstream media, newspapers, social networks, start-ups and more producing original content have an extremely high demand for our attention. If we choose to pay attention (the pun takes on a paradoxical meaning in this context), we will do so only to those who give us the highest return for it.

In other words, why would I read a blog post when I could instead refer to my Tumblr dashboard and scroll the haze of recommended content from users I “follow” and thus (to some extent) trust the opinions of? Tumblr is a genius platform because it builds itself on curations of curations of curations—maximizing return in the attention economy.

On Tumblr, consumers become producers by merely regurgitating what is digested.

This same logic is manifest in the trend toward Meta-Vendors such as Amazon.com, which provide consumers with everything they could possibly wish to purchase in one spot. Sheer girth becomes a crucial service, where the efficiency of the shopping experience must provide a higher return for attention paid. By this model, market competition becomes virtually impossible.

Consider, for example, John Cassidy’s response in The New Yorker to Chris Anderson’s otherwise extremely compelling Long Tail theory:

The long tail has meant that online commerce is being dominated by just a few businesses—mega-sites that can house those long tails. Even as Anderson speaks of plenitude and proliferation, you’ll notice that he keeps returning for his examples to a handful of sites—iTunes, eBay, Amazon, Netflix, MySpace. The successful long-tail aggregators can pretty much be counted on the fingers of one hand.

And yet, this is only true if success is measured exclusively by commercial gain. The joy of sharing itself is proving to be motivation enough for most users (see: Wikipedia editors). Once individual information-consumers/attention-producers locate small passion-driven communities that both manage and generate information effectively, the Meta-Vendors and mass aggregation will lose all cultural influence on these individuals (if not their business). We might even say the Attention Economy—for all its shortcomings—might enable the re-emergence of a virtual Bohemia.

The Attention Economy is the theory behind our sister site, TNI | Syllabus. While The New Inquiry emphasizes the open generation of collaborative, original content distributed on a Tumblr platform, TNI | Syllabus is a user-generated, submission-based curation of “the best of the internet” as chosen by TNI readers. Something like an attention commune.

Though it wasn’t my intent to plug TNI when I began this post, it seems like an appropriate moment to say: If you are following us here, you might want to follow the syllabus as well—and if you haven’t yet, consider contributing to both.