The Lords of the Flies: American Collapse’s Lesson for History and the World

My American friend Tucker stays late at his highly professional job every night and arrives early every morning. He’s not paid for it. He’s just expected to do it. I ask him, as someone who studies management and leadership, who tells him to. No one, he says. The expectation is just there. Lingering in the air, like an unspoken threat.

We often say that America is an experiment. But what is it an experiment in? Some will say “freedom”, but you can’t really say a country that’s been unsegregated for less than 25% of its history is an experiment in freedom. I think America is an experiment of a different kind. One that reveals a great truth about political economy to history and the world.

It is an experiment in the survival of the fittest.

America is a Darwinian organization. There are many kinds of organizations. Not all are Darwinian. Some are what I’d call Dionysian, like a nightclub. Some are Apollonian, aimed at achievement, like a great university. Only some are Darwinian — devoted to the survival of the fittest.

That phrase describes American history, don’t you think? First blacks and natives were dehumanized — they were “fit” only for hard labour, morally defective. Then immigrant whites of all kinds were too, “fit” only for menial jobs. Wherever you look at American history, you’ll see this idea of the survival of the fittest — all the way down to today, when the poor and weak are expected to simply and quite literally die in the streets, and the strong — the famous, the adored, the powerful — are rewarded by being allowed to take all. Why?

The idea behind all this — when there was a justification, that is — was that the rise of the fittest would somehow benefit everyone. It would yield superpeople: smarter, nicer maybe, stronger, better. Call it a trickle down theory of human potential. But it didn’t (that’s self-evident: every indicator of a good life is falling).

Instead, it yielded something else entirely: superpredators. What does the survival of the fittest yield in nature? It yields better and better predators. Evolution went from bacteria to jellyfish to sharks with giant teeth. The same is true in society. America has bred a new class of people: superpredators. They are congressmen who can throw tens of millions off healthcare without any moral concern. They are the super rich who watch a nation’s life expectancy fall and laugh. I’m not really judging them — OK, maybe a little. But mostly, I’m observing. And here’s what I see.

Remember my friend Tucker? America is a land ruled by little bullies now. Just think about it with me before you react patriotically. Americans are told by screaming bullies on the news what to think. They are hounded by bullying debt collectors owned by bullying banks. Their politicians who don’t represent them bully them into cowed submission, though they live the poorest lives in the rich world. They are bullied by bosses into overwork for little pay and almost no leisure time. It goes on and on and on.

The bullies, in turn, are ruled by bigger and bigger bullies. Until we get to the biggest bullies of them all — and right now, it’s painfully self-evident who they are. The biggest bullies. Superpredators. It’s not a coincidence. The experiment worked. But not in the way its architects intended. It didn’t end in superpeople.

Superpredators are what social survival of the fittest yields. Just as natural evolution yields sharks with bigger teeth. And when we look carefully, America is a society that prizes an evolutionary paradigm above all. People should “adapt” to “changes” in their “environment”. “Innovation”, “change”, “transformation” are all ways that the economy “evolves”. The result is a society that produces stronger, crueller, meaner predators.

But better predators are not people who are better human beings. So a “fitness criterion”, as biologists call it, some measure of selfish success, whether it is profit or baronial titles, isn’t sufficient to evoke human potential. Why not?

Do you remember William Golding’s Lord of the Flies? It’s one of my favourite books. And while you might think it’s about kids being abandoned, I think there’s more to it. I think it’s a parable about the survival of the fittest. The boys kill Piggy — and that is when they lose their moral souls. Their little society, too, is Darwinian. Yet it doesn’t lead them anywhere but into the abyss.

Golding knew something that we have forgotten. Civilization, a process, a project, must — must — reject the Nietzschean idea of the survival of the fittest. It must prize greater things in human beings. The ability to dream, defy, love, forgive, create, rebel. That is where real human breakthroughs come from, whether in art, literature, science, or politics. That is where peace and prosperity lie. To genuinely value human potential, life, possibility, is the opposite of the survival of the fittest.

Evolution is not a good answer to the questions of social organization and human potential. It can go in many directions. It can make dinosaurs, sharks, and only sometimes human beings with moral concerns. And even then the moral concerns of human beings must go against their evolutionary prerogatives, their animal spirits. Thus, when a quest for “fitness” makes carnivores of men, then society must be protected from it — not harnessed to it.

To apply the rule of the survival of the fittest to a society, to let evolution blindly take its course, will naturally end in America. A land ruled by superpredators, where the average life has no hope left of fully living. The sharks, now that they have been bred, feast on the fish.

That is American collapse’s real lesson for history and the world.

Umair
June 2017


It’s not an attack on the arts, it’s an attack on communities

Art and architecture critic March 16 at 3:03 PM
Things could get worse, much worse. The president’s proposed budget eliminates much of the government’s long-standing commitment to the arts, to science, to education, to culture, to public broadcasting and community development. It calls not only for the elimination of the National Endowment for the Arts, the National Endowment for the Humanities, the Corporation for Public Broadcasting and the Institute of Museum and Library Services, but also proposes the elimination of groups such as the Woodrow Wilson Center, a highly respected think tank that studies national and international affairs and just happens to be hosting a program Thursday called “The Muse of Urban Delirium: How the Performing Arts Paradoxically Transform Conflict-Ridden Cities Into Centers of Cultural Innovation.” It’s almost as if someone tried to fit as many dirty words as possible — dirty in the current administration’s way of thinking — into one evening: Arts, Cities, Culture, Paradox, Innovation.

These cuts aren’t about cost savings — they’re far too small to make even a ding in the federal budget. They are carefully calculated attacks on communities, especially those that promote independent thinking and expression, or didn’t line up behind the Trump movement as it swept to power through the electoral college in November. But the president’s proposed budget also includes attacks on communities that did indeed support Trump but that are too powerless to resist. Among the independent agencies set for elimination: the Appalachian Regional Commission, which supports things such as job training, economic diversification (including the arts), tourism initiatives and Internet access in states like West Virginia, Alabama and Kentucky.

The strategy, perfectly calculated for a new era of rancor and resentment amplified by social media, is to focus people not on what will be lost, but on who will lose. Why attack communities that support you? Because losing isn’t just a question of what side, what arguments, what ideology prevails in the political debate. Rather, losing is a stigma, a scarlet letter to hang on the necks of people who are losers. Losers are essential to the project of building a new political coalition, a coalition that celebrates winning. Winners are strong; losers are sad. If your aversion to being branded a loser is strong enough, you may even embrace policies that cause you harm.

President Trump’s proposed budget calls for the elimination of the National Endowment for the Arts, the National Endowment for the Humanities, the Institute of Museum and Library Services, and the Corporation for Public Broadcasting. Small and rural programs would be hit hardest. (Erin Patrick O’Connor/The Washington Post)

Read through The Washington Post’s coverage of the budget proposal, and you hear what begins to sound like a broken record: These cuts will primarily affect marginalized or minority communities, people on the losing end of the American Dream. From an article about the Interior Department: “Historic-sites funding is important,” according to one expert, “because it supports tribal preservation officers and provides grants to underrepresented communities.” Or from the Labor Department: “The Trump administration proposed $2.5 billion in cuts for the Labor Department in a plan that would significantly reduce funding for job training programs for seniors and disadvantaged youth.”

Just in time for today’s announcement is an op-ed by Washington Post columnist George Will, who also calls for the elimination of the NEA. Will’s article would be a risible period piece — he is still seething over culture-war debates from more than a quarter century ago — if his hostility to the arts were not politically empowered by the democratic peculiarities of the last election, which brought into office a deeply unpopular president allied (for now) to a Congress pursuing deeply unpopular policies because many of its members are protected by gerrymandering.

Will rehashes the usual arguments: He reminds readers of a handful of grants that were deemed offensive by some in the early 1990s; he asserts that people will pay for the arts if they want the arts, and that state and local arts agencies will step up if the federal government (which helps fund these agencies) forsakes them; and he argues that the arts are no different, no more a social good, of no more utility or spiritual value than “macaroni and cheese.” He not only fails to understand the nature of the arts, he also fails to understand the uniquely American three-legged stool system of federal stimulus allied to state and local support and bolstered by private donations that has enriched the arts and the country for more than half a century.

“The myriad entities with financial interests in preserving the NEA cloyingly call themselves the ‘arts community,’ a clever branding that other grasping factions should emulate,” he writes, cloyingly. “The ‘arts community’ has its pitter-patter down pat. The rhetorical cotton candy — sugary, jargon-clotted arts gush — asserts that the arts nurture ‘civically valuable dispositions’ and a sense of ‘community and connectedness.’ And, of course, ‘diversity’ and ‘self-esteem.’ ”

The arts have a powerful economic effect on our society and employ vast numbers of people, but the arts community is hardly an assemblage of cynical, self-interested, deep-pocketed financial interests (for that, look to the president’s Cabinet). The “pitter-patter” of this rapacious arts juggernaut is indeed well practiced by now, but only because attacks on the arts are now a seasonal performance from a determined minority political faction. The arts do indeed foster a sense of “community and connectedness” . . . in places like Nebraska, Alaska, Missouri, Nevada, Georgia, Tennessee and Alabama. And the other 43 states of the Union. And not only do they nurture diversity, they also express and preserve the variegated richness of culture celebrated in that musty old Latin phrase “E pluribus unum” (it’s on the money, if you want to check).

But the most jejune moment of Will’s extraordinary performance is this: “What, however, is art? We subsidize soybean production, but at least we can say what soybeans are.” For a few centuries now, it has been the nature of art to wonder what art is. That’s how the arts think, how they operate, how they define the parameters of aesthetic experience. And for the entire history of the species, art has been fundamentally different, less tangible, less utilitarian in its function, than soybeans. These things are obvious, if you’ve ever spent time with the arts community, which in fact exists and adds immeasurably to the stability, cohesion, intelligence, beauty and resilience of the nation.

The end of the Old World: How technology shrunk America forever

The 19th century saw an explosion of changes in America. The way people saw the world would never be the same


It has become customary to mark the beginning of the Industrial Revolution in eighteenth-century England. Historians usually identify two or sometimes three phases of the Industrial Revolution, which are associated with different sources of energy and related technologies. In preindustrial Europe, the primary energy sources were human, animal, and natural (wind, water, and fire).

By the middle of the eighteenth century, much of Europe had been deforested to supply wood for domestic and industrial consumption. J.R. McNeill points out that the combination of energy sources, machines, and ways of organizing production came together to form “clusters” that determined the course of industrialization and, by extension, shaped economic and social developments. A later cluster did not immediately replace its predecessor; rather, different regimes overlapped, though often they were not integrated. With each new cluster, however, the speed of production increased, leading to differential rates of production. The first phase of the Industrial Revolution began around 1750 with the shift from human and animal labor to machine-based production. This change was brought about by the use of water power and later steam engines in the textile mills of Great Britain.

The second phase dates from the 1820s, when there was a shift to fossil fuels—primarily coal. By the middle of the nineteenth century, another cluster emerged from the integration of coal, iron, steel, and railroads. The fossil fuel regime was not, of course, limited to coal. Edwin L. Drake drilled the first commercially successful well in Titusville, Pennsylvania, in 1859 and the big gushers erupted first in the 1870s in Baku on the Caspian Sea and later in Spindletop, Texas (1901). Oil, however, did not replace coal as the main source of fuel in transportation until the 1930s. Coal, of course, is still widely used in manufacturing today because it remains one of the cheapest sources of energy. Though global consumption of coal has leveled off since 2000, its use continues to increase in China. Indeed, China currently uses almost as much coal as the rest of the world and reliable sources predict that by 2017, India will be importing as much coal as China.



The third phase of the Industrial Revolution began in the closing decades of the nineteenth century. The development of technologies for producing and distributing electricity cheaply and efficiently further transformed industrial processes and created the possibility for new systems of communication as well as the unprecedented capability for the production and dissemination of new forms of entertainment, media, and information. The impact of electrification can be seen in four primary areas.

First, the availability of electricity made the assembly line and mass production possible. When Henry Ford adapted technology used in Chicago’s meatpacking houses to produce cars (1913), he set in motion changes whose effects are still being felt. Second, the introduction of the incandescent light bulb (1881) transformed private and public space. As early as the late 1880s, electrical lighting was used in homes, factories, and on streets. Assembly lines and lights inevitably led to the acceleration of urbanization. Third, the invention of the telegraph (ca. 1840) and telephone (1876) enabled the communication and transmission of information across greater distances at faster rates of speed than ever before. Finally, electronic tabulating machines, invented by Herman Hollerith in 1889, made it possible to collect and manage data in new ways. Though his contributions have not been widely acknowledged, Hollerith actually forms a bridge between the Industrial Revolution and the so-called post-industrial information age. The son of German immigrants, Hollerith graduated from Columbia University’s School of Mines and went on to found the Tabulating Machine Company (1896). He created the first automatic card-feed mechanism and key-punch system with which an operator using a keyboard could process as many as three hundred cards an hour. Under the direction of Thomas J. Watson, Hollerith’s company merged with three others in 1911 to form the Computing-Tabulating-Recording Company. In 1924, the company was renamed International Business Machines Corporation (IBM).

There is much to be learned from such periodizations, but they have serious limitations. The developments I have identified overlap and interact in ways that subvert any simple linear narrative. Instead of thinking merely in terms of resources, products, and periods, it is also important to think in terms of networks and flows. The foundation for today’s wired world was laid more than two centuries ago. Beginning in the early nineteenth century, local communities, then states and nations, and finally the entire globe became increasingly connected. Though varying from time to time and place to place, there were two primary forms of networks: those that directed material flows (fuels, commodities, products, people), and those that channeled immaterial flows (communications, information, data, images, and currencies). From the earliest stages of development, these networks were inextricably interconnected. There would have been no telegraph network without railroads and no railroad system without the telegraph network, and neither could have existed without coal and iron. Networks, in other words, are never separate but form networks of networks in which material and immaterial flows circulate. As these networks continued to expand, and became more and more complex, there was a steady increase in the importance of immaterial flows, even for material processes. The combination of expanding connectivity and the growing importance of information technologies led to the acceleration of both material and immaterial flows. This emerging network of networks created positive feedback loops in which the rate of acceleration increased.

While developments in transportation, communications, information, and management were all important, industrialization as we know it is inseparable from the transportation revolution that trains created. In his foreword to Wolfgang Schivelbusch’s informative study “The Railway Journey: The Industrialization of Time and Space in the 19th Century,” Alan Trachtenberg writes, “Nothing else in the nineteenth century seemed as vivid and dramatic a sign of modernity as the railroad. Scientists and statesmen joined capitalists in promoting the locomotive as the engine of ‘progress,’ a promise of imminent Utopia.”

In England, railway technology developed as an extension of coal mining. The shift from human and natural sources of energy to fossil fuels created a growing demand for coal. While steam engines had been used since the second half of the eighteenth century in British mines to run fans and pumps like those my great-grandfather had operated in the Pennsylvania coalfields, it was not until 1801, when Oliver Evans invented a high-pressure, mobile steam engine, that locomotives were produced. By the beginning of the nineteenth century, the coal mined in the area around Newcastle was being transported throughout England on rail lines. It did not take long for this new rapid transit system to develop—by the 1820s, railroads had expanded to carry passengers, and half a century later rail networks spanned all of Europe.

What most impressed people about this new transportation network was its speed. The average speed of early railways in England was twenty to thirty miles per hour, which was approximately three times faster than stagecoaches. The increase in speed transformed the experience of time and space. Countless writers from this era use the same words to describe train travel as Karl Marx had used to describe emerging global financial markets. Trains, like capital, “annihilate space with time.”

Traveling on the recently opened Paris-Rouen-Orléans railway line in 1843, the German poet, journalist, and literary critic Heinrich Heine wrote: “What changes must now occur, in our way of looking at things, in our notions! Even the elementary concepts of time and space have begun to vacillate. Space is killed by the railways, and we are left with time alone. . . . Now you can travel to Orléans in four and a half hours, and it takes no longer to get to Rouen. Just imagine what will happen when the lines to Belgium and Germany are completed and connected up with their railways! I feel as if the mountains and forests of all countries were advancing on Paris. Even now, I can smell the German linden trees; the North Sea’s breakers are rolling against my door.” This new experience of space and time that speed brought about had profound psychological effects that I will consider later.

Throughout the nineteenth century, the United States lagged behind Great Britain in terms of industrial capacity: in 1869, England was the source of 20 percent of the world’s industrial production, while the United States contributed just 7 percent. By the start of World War I, however, America’s industrial capacity surpassed that of England: that is, by 1913, the scales had tipped—32 percent came from the United States and only 14 percent from England. While England had a long history before the Industrial Revolution, the history of the United States effectively begins with the Industrial Revolution. There are other important differences as well. Whereas in Great Britain the transportation revolution grew out of the industrialization of manufacturing primarily, but not exclusively, in textile factories, in the United States mechanization began in agriculture and spread to transportation before it transformed manufacturing. In other words, in Great Britain, the Industrial Revolution in manufacturing came first and the transportation revolution second, while in the United States, this order was reversed.

When the Industrial Revolution began in the United States, most of the country beyond the Eastern Seaboard was largely undeveloped. Settling this uncharted territory required the development of an extensive transportation network. Throughout the early decades of the nineteenth century, the transportation system consisted of a network of rudimentary roads connecting towns and villages with the countryside. New England, Boston, New York, Philadelphia, Baltimore, and Washington were joined by highways suitable for stagecoach travel. Inland travel was largely confined to rivers and waterways. The completion of the Erie Canal (1817–25) marked the first stage in the development of an extensive network linking rivers, lakes, canals, and waterways along which produce and people flowed. Like so much else in America, the railroad system began in Boston. By 1840, only 18,181 miles of track had been laid. During the following decade, however, there was an explosive expansion of the nation’s rail system financed by securities and bonds traded on stock markets in America and London. By the 1860s, the railroad network east of the Mississippi River was using routes roughly similar to those employed today.

Where some saw loss, others saw gain. In 1844, inveterate New Englander Ralph Waldo Emerson associated the textile loom with the railroad when he reflected, “Not only is distance annihilated, but when, as now, the locomotive and the steamboat, like enormous shuttles, shoot every day across the thousand various threads of national descent and employment, and bind them fast in one web, an hourly assimilation goes forward, and there is no danger that local peculiarities and hostilities should be preserved.” Gazing at tracks vanishing in the distance, Emerson saw a new world opening that, he believed, would overcome the parochialisms of the past. For many people in the nineteenth century, this new world promising endless resources and endless opportunity was the American West. A transcontinental railroad had been proposed as early as 1820 but was not completed until 1869.

On May 10, 1869, Leland Stanford, who would become the governor of California and, in 1891, founder of Stanford University, drove the final spike in the railroad that joined east and west. Nothing would ever be the same again. This event was not merely local, but also, as Emerson had surmised, global. Like the California gold and Nevada silver spike that Leland had driven to join the rails, the material transportation network and immaterial communication network intersected at that moment to create what Rebecca Solnit correctly identifies as “the first live national media event.” The spike “had been wired to connect to the telegraph lines that ran east and west along the railroad tracks. The instant Stanford struck the spike, a signal would go around the nation. . . . The signal set off cannons in San Francisco and New York. In the nation’s capital the telegraph signal caused a ball to drop, one of the balls that visibly signaled the exact time in observatories in many places then (of which the ball dropped in New York’s Times Square at the stroke of the New Year is a last relic). The joining of the rails would be heard in every city equipped with fire-alarm telegraphs, in Philadelphia, Omaha, Buffalo, Chicago, and Sacramento. Celebrations would be held all over the nation.” This carefully orchestrated spectacle, which was made possible by the convergence of multiple national networks, was worthy of the future Hollywood and the technological wizards of Silicon Valley whose relentless innovation Stanford’s university would later nourish. What most impressed people at the time was the speed of global communication, which now is taken for granted.

Flickering Images—Changing Minds

Industrialization not only changes systems of production and distribution of commodities and products, but also imposes new disciplinary practices that transform bodies and change minds. During the early years of train travel, bodily acceleration had an enormous psychological effect that some people found disorienting and others found exhilarating. The mechanization of movement created what Ann Friedberg describes as the “mobile gaze,” which transforms one’s surroundings and alters both the content and, more important, the structure of perception. This mobile gaze takes two forms: the person can move and the surroundings remain immobile (train, bicycle, automobile, airplane, elevator), or the person can remain immobile and the surroundings move (panorama, kinetoscope, film).

When considering the impact of trains on the mobilization of the gaze, it is important to note that different designs for railway passenger cars had different perceptual and psychological effects. Early European passenger cars were modeled on stagecoaches in which individuals had seats in separate compartments; early American passenger cars, by contrast, were modeled on steamboats in which people shared a common space and were free to move around. The European design tended to reinforce social and economic hierarchies that the American design tried to break down. Eventually, American railroads adopted the European model of fixed individual seating but had separate rows facing in the same direction rather than different compartments. As we will see, the resulting compartmentalization of perception anticipates the cellularization of attention that accompanies today’s distributed high-speed digital networks.

During the early years, there were numerous accounts of the experience of railway travel by ordinary people, distinguished writers, and even physicians, in which certain themes recur. The most common complaint is the sense of disorientation brought about by the experience of unprecedented speed. There are frequent reports of the dispersion and fragmentation of attention that are remarkably similar to contemporary personal and clinical descriptions of attention-deficit hyperactivity disorder (ADHD). With the landscape incessantly rushing by faster than it could be apprehended, people suffered overstimulation, which created a sense of psychological exhaustion and physical distress. Some physicians went so far as to maintain that the experience of speed caused “neurasthenia, neuralgia, nervous dyspepsia, early tooth decay, and even premature baldness.”

In 1892, Sir James Crichton-Browne attributed the significant increase in the mortality rate between 1859 and 1888 to “the tension, excitement, and incessant mobility of modern life.” Commenting on these statistics, Max Nordau might well be describing the harried pace of life today. “Every line we read or write, every human face we see, every conversation we carry on, every scene we perceive through the window of the flying express, sets in activity our sensory nerves and our brain centers. Even the little shocks of railway travelling, not perceived by consciousness, the perpetual noises and the various sights in the streets of a large town, our suspense pending the sequel of progressing events, the constant expectation of the newspaper, of the postman, of visitors, cost our brains wear and tear.” During the years around the turn of the last century, a sense of what Stephen Kern aptly describes as “cultural hypochondria” pervaded society. Like today’s parents concerned about the psychological and physical effects of their kids playing video games, nineteenth-century physicians worried about the effect of people sitting in railway cars for hours watching the world rush by in a stream of images that seemed to be detached from real people and actual things.

In addition to the experience of disorientation, dispersion, fragmentation, and fatigue, rapid train travel created a sense of anxiety. People feared that with the increase in speed, machinery would spin out of control, resulting in serious accidents. An 1829 description of a train ride expresses the anxiety that speed created. “It is really flying, and it is impossible to divest yourself of the notion of instant death to all upon the least accident happening.” A decade and a half later, an anonymous German explained that the reason for such anxiety is the always “close possibility of an accident, and the inability to exercise any influence on the running of the cars.” When several serious accidents actually occurred, anxiety spread like a virus. Anxiety, however, is always a strange experience—it not only repels, it also attracts; danger and the anxiety it brings are always part of speed’s draw.

Perhaps this was a reason that not everyone found trains so distressing. For some people, the experience of speed was “dreamlike” and bordered on ecstasy. In 1843, Emerson wrote in his Journals, “Dreamlike travelling on the railroad. The towns which I pass between Philadelphia and New York make no distinct impression. They are like pictures on a wall.” The movement of the train creates a loss of focus that blurs the mobile gaze. A few years earlier, Victor Hugo’s description of train travel sounds like an acid trip as much as a train trip. In either case, the issue is speed. “The flowers by the side of the road are no longer flowers but flecks, or rather streaks, of red or white; there are no longer any points, everything becomes a streak; grain fields are great shocks of yellow hair; fields of alfalfa, long green tresses; the towns, the steeples, and the trees perform a crazy mingling dance on the horizon; from time to time, a shadow, a shape, a specter appears and disappears with lightning speed behind the window; it’s a railway guard.” The flickering images fleeting past train windows are like a film running too fast to comprehend.

Transportation was not the only thing accelerating in the nineteenth century—the pace of life itself was speeding up as never before. Listening to the whistle of the train headed to Boston in his cabin beside Walden Pond, Thoreau mused, “The startings and arrivals of the cars are now the epochs in the village day. They go and come with such regularity and precision, and their whistle can be heard so far, that the farmers set their clocks by them, and thus one well conducted institution regulates a whole country. Have not men improved somewhat in punctuality since the railroad was invented? Do they not talk and think faster in the depot than they did in the stage office? There is something electrifying in the atmosphere of the former place. I have been astonished by some of the miracles it has wrought.” And yet Thoreau, more than others, knew that these changes also had a dark side.

The transition from agricultural to industrial capitalism brought with it a massive migration from the country, where life was slow and governed by natural rhythms, to the city, where life was fast and governed by mechanical, standardized time. The convergence of industrialization, transportation, and electrification made urbanization inevitable. The faster that cities expanded, the more some writers and poets idealized rustic life in the country. Nowhere is such idealization more evident than in the writings of British romantics. The rapid swirl of people, machines, and commodities created a sense of vertigo as disorienting as train travel. Wordsworth writes in The Prelude,

Oh, blank confusion! True epitome
Of what the mighty City is herself
To thousands upon thousands of her sons,
Living among the same perpetual whirl
Of trivial objects, melted and reduced
To one identity, by differences
That have no law, no meaning, no end—

By 1850, fifteen cities in the United States had a population exceeding 50,000. New York was the largest (1,080,330), followed by Philadelphia (565,529), Baltimore (212,418), and Boston (177,840). Increasing domestic trade that resulted from the railroad and growing foreign trade that accompanied improved ocean travel contributed significantly to this growth. While commerce was prevalent in early cities, manufacturing expanded rapidly during the latter half of the eighteenth century. The most important factor contributing to nineteenth-century urbanization was the rapid development of the money economy. Once again, it is a matter of circulating flows, not merely of human bodies but of mobile commodities. Money and cities formed a positive feedback loop—as the money supply grew, cities expanded, and as cities expanded, the money supply grew.

The fast pace of urban life was as disorienting for many people as the speed of the train. In his seminal essay “The Metropolis and Mental Life,” Georg Simmel observes, “The psychological foundation upon which the metropolitan individuality is erected, is the intensification of emotional life due to the swift and continuous shift of external and internal stimuli. Man is a creature whose existence is dependent on differences, i.e., his mind is stimulated by the difference between present impressions and those which have preceded. . . . To the extent that the metropolis creates these psychological conditions—with every crossing of the street, with the tempo and multiplicity of economic, occupational and social life—it creates the sensory foundations of mental life, and in the degree of awareness necessitated by our organization as creatures dependent on differences, a deep contrast with the slower, more habitual, more smooth flowing rhythm of the sensory-mental phase of small town and rural existence.” The expansion of the money economy created a fundamental contradiction at the heart of metropolitan life. On the one hand, cities brought together different people from all backgrounds and walks of life, and on the other hand, emerging industrial capitalism leveled these differences by disciplining bodies and programming minds. “Money,” Simmel continues, “is concerned only with what is common to all, i.e., with the exchange value which reduces all quality and individuality to a purely quantitative level.” The migration from country to city that came with the transition from agricultural to industrial capitalism involved a shift from homogeneous communities to heterogeneous assemblages of different people, qualitative to quantitative methods of assessment and evaluation, as well as concrete to abstract networks of exchange of goods and services, and a slow to fast pace of life. I will consider further aspects of these disciplinary practices in Chapter 3; for now, it is important to understand the implications of the mechanization or industrialization of perception.

I have already noted similarities between the experience of looking through a window on a speeding train and the experience of watching a film that is running too fast. During the latter half of the nineteenth century a remarkable series of inventions transformed not only what people experienced in the world but how they experienced it: photography (Louis-Jacques-Mandé Daguerre, ca. 1837), the telegraph (Samuel F. B. Morse, ca. 1840), the stock ticker (Thomas Alva Edison, 1869), the telephone (Alexander Graham Bell, 1876), the chronophotographic gun (Étienne-Jules Marey, 1882), the kinetoscope (Edison, 1894), the zoopraxiscope (Eadweard Muybridge, 1893), the phantoscope (Charles Jenkins, 1894), and cinematography (Auguste and Louis Lumière, 1895). The way in which human beings perceive and conceive the world is not hardwired in the brain but changes with new technologies of production and reproduction.

Just as the screens of today’s TVs, computers, video games, and mobile devices are restructuring how we process experience, so too did new technologies at the end of the nineteenth century change the world by transforming how people apprehended it. While each innovation had a distinctive effect, there is a discernible overall trajectory to these developments. Industrial technologies of production and reproduction extended processes of dematerialization that eventually led first to consumer capitalism and then to today’s financial capitalism. The crucial variable in these developments is the way in which material and immaterial networks intersect to produce a progressive detachment of images, representations, information, and data from concrete objects and actual events. Marveling at what he regarded as the novelty of photographs, Oliver Wendell Holmes commented, “Form is henceforth divorced from matter. In fact, matter as a visible object is of no great use any longer, except as the mould on which form is shaped. Give us a few negatives of a thing worth seeing, taken from different points of view, and that is all we want of it. Pull it down or burn it up, if you please. . . . Matter in large masses must always be fixed and dear, form is cheap and transportable. We have got the fruit of creation now, and need not trouble ourselves about the core.”

Technologies for the reproduction and transmission of images and information expand the process of abstraction initiated by the money economy to create a play of freely floating signs without anything to ground, certify, or secure them. With new networks made possible by the combination of electrification and the invention of the telegraph, telephone, and stock ticker, communication was liberated from the strictures imposed by physical means of conveyance. In previous energy regimes, messages could be sent no faster than people, horses, carriages, trains, ships, or automobiles could move. Dematerialized words, sounds, information, and eventually images, by contrast, could be transmitted across great distances at high speed. With this dematerialization and acceleration, Marx’s prediction—that “everything solid melts into air”—was realized. But this was just the beginning. It would take more than a century for electrical currents to become virtual currencies whose transmission would approach the speed limit.

Excerpted from “Speed Limits: Where Time Went and Why We Have So Little Left,” by Mark C. Taylor, published October 2014 by Yale University Press. Copyright ©2014 by Mark C. Taylor. Reprinted by permission of Yale University Press.

http://www.salon.com/2014/10/19/the_end_of_the_old_world_how_technology_shrunk_america_forever/?source=newsletter

Google makes us all dumber

…the neuroscience of search engines

As search engines get better, we become lazier. We’re hooked on easy answers and undervalue asking good questions


In 1964, Pablo Picasso was asked by an interviewer about the new electronic calculating machines, soon to become known as computers. He replied, “But they are useless. They can only give you answers.”

We live in the age of answers. The ancient library at Alexandria was believed to hold the world’s entire store of knowledge. Today, there is enough information in the world for every person alive to be given three times as much as was held in Alexandria’s entire collection — and nearly all of it is available to anyone with an internet connection.

This library accompanies us everywhere, and Google, chief librarian, fields our inquiries with stunning efficiency. Dinner table disputes are resolved by smartphone; undergraduates stitch together a patchwork of Wikipedia entries into an essay. In a remarkably short period of time, we have become habituated to an endless supply of easy answers. You might even say dependent.

Google is known as a search engine, yet there is barely any searching involved anymore. The gap between a question crystallizing in your mind and an answer appearing at the top of your screen is shrinking all the time. As a consequence, our ability to ask questions is atrophying. Google’s head of search, Amit Singhal, asked if people are getting better at articulating their search queries, sighed and said: “The more accurate the machine gets, the lazier the questions become.”

Google’s strategy for dealing with our slapdash questioning is to make the question superfluous. Singhal is focused on eliminating “every possible friction point between [users], their thoughts and the information they want to find.” Larry Page has talked of a day when a Google search chip is implanted in people’s brains: “When you think about something you don’t really know much about, you will automatically get information.” One day, the gap between question and answer will disappear.

I believe we should strive to keep it open. That gap is where our curiosity lives. We undervalue it at our peril.

The Internet can make us feel omniscient. But it’s the feeling of not knowing which inspires the desire to learn. The psychologist George Loewenstein gave us the simplest and most powerful definition of curiosity, describing it as the response to an “information gap.” When you know just enough to know that you don’t know everything, you experience the itch to know more. Loewenstein pointed out that a person who knows the capitals of three out of 50 American states is likely to think of herself as knowing something (“I know three state capitals”). But a person who has learned the names of 47 state capitals is likely to think of herself as not knowing three state capitals, and thus more likely to make the effort to learn those other three.



That word “effort” is important. It’s hardly surprising that we love the ease and fluency of the modern web: our brains are designed to avoid anything that seems like hard work. The psychologists Susan Fiske and Shelley Taylor coined the term “cognitive miser” to describe the stinginess with which the brain allocates limited attention, and its in-built propensity to seek mental short-cuts. The easier it is for us to acquire information, however, the less likely it is to stick. Difficulty and frustration — the very friction that Google aims to eliminate — ensure that our brain integrates new information more securely. Robert Bjork, of the University of California, uses the phrase “desirable difficulties” to describe the counterintuitive notion that we learn better when the learning is hard. Bjork recommends, for instance, spacing teaching sessions further apart so that students have to make more effort to recall what they learned last time.

A great question should launch a journey of exploration. Instant answers can leave us idling at base camp. When a question is given time to incubate, it can take us to places we hadn’t planned to visit. Left unanswered, it acts like a searchlight ranging across the landscape of different possibilities, the very consideration of which makes our thinking deeper and broader. Searching for an answer in a printed book is inefficient, and takes longer than in its digital counterpart. But while flicking through those pages your eye may alight on information that you didn’t even know you wanted to know.

The gap between question and answer is where creativity thrives and scientific progress is made. When we celebrate our greatest thinkers, we usually focus on their ingenious answers. But the thinkers themselves tend to see it the other way around. “Looking back,” said Charles Darwin, “I think it was more difficult to see what the problems were than to solve them.” The writer Anton Chekhov declared, “The role of the artist is to ask questions, not answer them.” The very definition of a bad work of art is one that insists on telling its audience the answers, and a scientist who believes she has all the answers is not a scientist.

According to the great physicist James Clerk Maxwell, “thoroughly conscious ignorance is the prelude to every real advance in science.” Good questions induce this state of conscious ignorance, focusing our attention on what we don’t know. The neuroscientist Stuart Firestein teaches a course on ignorance at Columbia University, because, he says, “science produces ignorance at a faster rate than it produces knowledge.” Raising a toast to Einstein, George Bernard Shaw remarked, “Science is always wrong. It never solves a problem without creating ten more.”

Humans are born consciously ignorant. Compared to other mammals, we are pushed out into the world prematurely, and stay dependent on elders for much longer. Endowed with so few answers at birth, children are driven to question everything. In 2007, Michelle Chouinard, a psychology professor at the University of California, analyzed recordings of four children interacting with their respective caregivers for two hours at a time, for a total of more than two hundred hours. She found that, on average, the children posed more than a hundred questions every hour.

Very small children use questions to elicit information — “What is this called?” But as they grow older, their questions become more probing. They start looking for explanations and insight, to ask “Why?” and “How?” Extrapolating from Chouinard’s data, the Harvard professor Paul Harris estimates that between the ages of 3 and 5, children ask 40,000 such questions. The numbers are impressive, but what’s really amazing is the ability to ask such a question at all. Somehow, children instinctively know there is a vast amount they don’t know, and they need to dig beneath the world of appearances.

In a 1984 study by British researchers Barbara Tizard and Martin Hughes, four-year-old girls were recorded talking to their mothers at home. When the researchers analyzed the tapes, they found that some children asked more “How” and “Why” questions than others, and engaged in longer passages of “intellectual search” — a series of linked questions, each following from the other. (In one such conversation, four-year-old Rosy engaged her mother in a long exchange about why the window cleaner was given money.) The more confident questioners weren’t necessarily the children who got more answers from their parents, but the ones who got more questions. Parents who threw questions back to their children — “I don’t know, what do you think?” — raised children who asked more questions of them. Questioning, it turns out, is contagious.

Childish curiosity only gets us so far, however. To ask good questions, it helps if you have built your own library of answers. It’s been proposed that the Internet relieves us of the onerous burden of memorizing information. Why cram our heads with facts, like the date of the French Revolution, when they can be summoned up in a swipe and a couple of clicks? But knowledge doesn’t just fill the brain up; it makes it work better. To see what I mean, try memorizing the following string of fourteen digits in five seconds:

74830582894062

Hard, isn’t it? Virtually impossible. Now try memorizing this string of letters:

lucy in the sky with diamonds

This time, you barely needed a second. The contrast is so striking that it seems like a completely different problem, but fundamentally, it’s the same. The only difference is that one string of symbols triggers a set of associations with knowledge you have stored deep in your memory. Without thinking, you can group the letters into words, the words into a sentence you understand as grammatical — and the sentence is one you recognize as the title of a song by the Beatles. The knowledge you’ve gathered over years has made your brain’s central processing unit more powerful.

This tells us something about the idea we should outsource our memories to the web: it’s a short-cut to stupidity. The less we know, the worse we are at processing new information, and the slower we are to arrive at pertinent inquiry. You’re unlikely to ask a truly penetrating question about the presidency of Richard Nixon if you have just had to look up who he is. According to researchers who study innovation, the average age at which scientists and inventors make breakthroughs is increasing over time. As knowledge accumulates across generations, it takes longer for individuals to acquire it, and thus longer to be in a position to ask the questions which, in Susan Sontag’s phrase, “destroy the answers”.

My argument isn’t with technology, but the way we use it. It’s not that the Internet is making us stupid or incurious. Only we can do that. It’s that we will only realize the potential of technology and humans working together when each is focused on its strengths — and that means we need to consciously cultivate effortful curiosity. Smart machines are taking over more and more of the tasks assumed to be the preserve of humans. But no machine, however sophisticated, can yet be said to be curious. The technology visionary Kevin Kelly succinctly defines the appropriate division of labor: “Machines are for answers; humans are for questions.”

The practice of asking perceptive, informed, curious questions is a cultural habit we should inculcate at every level of society. In school, students are generally expected to answer questions rather than ask them. But educational researchers have found that students learn better when they’re gently directed towards the lacunae in their knowledge, allowing their questions to bubble up through the gaps. Wikipedia and Google are best treated as starting points rather than destinations, and we should recognize that human interaction will always play a vital role in fueling the quest for knowledge. After all, Google never says, “I don’t know — what do you think?”

The Internet has the potential to be the greatest tool for intellectual exploration ever invented, but only if it is treated as a complement to our talent for inquiry rather than a replacement for it. In a world awash in ready-made answers, the ability to pose difficult, even unanswerable questions is more important than ever.

Picasso was half-right: computers are useless without truly curious humans.

Ian Leslie is the author of “Curious: The Desire To Know and Why Your Future Depends On It.” He writes on psychology, trends and politics for The Economist, The Guardian, Slate and Granta. He lives in London. Follow him on Twitter at @mrianleslie.

http://www.salon.com/2014/10/12/google_makes_us_all_dumber_the_neuroscience_of_search_engines/?source=newsletter

David Lowery: Here’s how Pandora is destroying musicians

Cracker and Camper van Beethoven’s David Lowery tells Salon how streaming services might end true avant-garde music


David Lowery has become both beloved and notorious over the last year as one of the musicians most critical of the ways musicians are paid in the digital era. The Camper van Beethoven and Cracker singer brings an artist’s rage and a quant’s detached rigor to his analysis of the music business.

He’s currently fired up about a federal lawsuit filed in New York in which several record labels have sued Pandora (and before that, Sirius XM) for neglecting to pay royalties for songs recorded before Feb. 15, 1972. Here’s how Billboard summarizes the suit: “The labels say both digital music services take advantage of a copyright loophole, since the master recording copyright wasn’t created federally until 1972. … But the labels claim that their master recordings are protected by individual state copyright laws and therefore deserve royalty payments.”

Lowery thinks the loophole provides a way for Pandora to simply not pay older musicians for their work — while profiting from it themselves. The case could get bigger and change in strange ways, with broad implications.

And he’s similarly frustrated with the rise of streaming services, which are in part owned by the major labels. “For us, it’s the worst-case scenario,” he says. “The old boss and the new boss have joined hands, they’re singing ‘Kumbaya,’ and they’ve changed the words to, ‘Fuck the songwriters! Fuck the performers!’ ”

We spoke to Lowery from a studio in Wisconsin, where he was recording a new Cracker record.

There’s a sort of complicated and technical case in New York right now, involving musicians’ royalties from before 1972: It’s a lawsuit that the general public doesn’t know that much about, but it’s important for musicians, especially for older musicians. Tell us what’s going on.

Back in 1971, there was a series of legislative actions. Before 1972, copyrights for the sound recording weren’t federal, they were [handled at the state level]. So we had some copyright reforms in the ‘70s, which adjusts for technology and things like that. They basically created a federal copyright for sound recordings. And for many, many years people just had assumed — and many of these services had acted as if — the intention of the act was to federalize all sound recordings, not really making a distinction in 1972. But somehow, in the last few years, probably starting in 2009, a few of the digital services have decided that there is no federal copyright for sound recordings created before 1972 — so they’ve just stopped paying these artists.



That includes a lot of legacy artists, like Otis Redding, Aretha Franklin — the writer and main performer of “Respect.” So you have these services that — not all of them, but some of them — just decided that they weren’t going to pay royalties on this. The general public might look at this and go, “This is just companies, and this is how they work, and they try to save money, and so they’re just doing what they can do.”

“They’re just doing what corporations always do.”

They’re just trying to minimize their expenses and stuff like that … But if you really look at this, you’ll see that it’s much, much more complicated than that. They’re making a very weird argument, right? Because ultimately, they lose either way.

The digital services, so Pandora, Sirius, Clear Channel, Digital Operations, whatever they may be. It’s not really clear — it’s definitely Sirius and Pandora — but it’s not really clear which other ones are there. But it’s a strange argument because they lose either way. Because if it’s not covered by federal law then it’s covered by state law. So if they win, it’s covered by state law, and suddenly these very large companies need a license from each individual state, essentially. Which would require them to negotiate with each copyright owner individually. And so there are a lot of people scratching their heads on this one, because why would they pursue a strategy like this? They lose either way. And they could lose really big on this.

So you look at this and, like a lot of things that happen with Wall Street-backed companies, there’s an incentive to keep the stock price high. Certainly in the case of Pandora — they’re kind of my bête noire, but I feel like they deserve it — you wonder whether a lot of these moves are just designed to keep the stock price high in the short term, while in the long term they’re creating enormous liabilities. To me this is one of the most important issues I’ve come across since I’ve been advocating for artists’ rights. Because it ends up not only screwing songwriters and song owners; it could create huge liabilities that ultimately cost pensions, and little old ladies their savings, and things like that.

You’re saying it could contribute to these digital-music companies collapsing? There’s been a lot of speculation that webcasters don’t have a business model that allows them to earn profits — that they won’t be around for long, despite the conventional wisdom that they’re saving the music business.

Exactly, and that’s kind of what I’m getting at; in a way, this is much bigger than songwriters’ rights. They don’t really win either way, in my opinion. I mean, yeah, it’s possible that they eke out some kind of financial advantage, but if federal law did not federalize sound recording copyrights, then we revert to state law. And that’s going to be a nightmare for everybody; it’s going to be a nightmare for artists, even your old AM/FM radio station.

Another funny thing: we’re one of only six countries in the world, and the only modern democracy, that doesn’t pay performers royalties when their recordings are played on terrestrial radio. I’m a songwriter, too, so I get royalties as a songwriter, but I don’t necessarily get royalties as a performer for terrestrial radio. Anyway, to me, this is just corporate sleaziness. It’s, “We’re going to fight this case that we’re going to lose, to basically save 6 or 10 percent of our expenses, and stick our shareholders, possibly, with these huge liabilities down the road.” Because if they create a situation in which they don’t have the copyrights for thousands of songs they’re streaming, theoretically they could be charged up to $150,000 in damages each time they play one of these songs. So that’s the story, all the way down in the weeds of what’s going on.
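To get a rough sense of the scale Lowery is describing, here is a back-of-the-envelope sketch in Python. Only the $150,000 damages figure comes from the interview; the number of recordings and the annual savings are hypothetical placeholders, not figures from the lawsuit.

```python
# Back-of-the-envelope sketch of the liability Lowery describes.
# Only the $150,000 damages figure comes from the interview; the catalog
# size and royalty savings below are hypothetical placeholders.

statutory_damages_per_work = 150_000      # damages figure cited in the interview
pre_1972_recordings_streamed = 10_000     # hypothetical number of pre-1972 recordings in rotation
annual_royalty_savings = 20_000_000       # hypothetical "6 or 10 percent of expenses"

worst_case_exposure = pre_1972_recordings_streamed * statutory_damages_per_work

print(f"Hypothetical annual savings:      ${annual_royalty_savings:,}")
print(f"Hypothetical worst-case exposure: ${worst_case_exposure:,}")
# Even with invented inputs, the potential exposure ($1.5 billion here)
# dwarfs the savings, which is the long-term liability being described.
```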

You’re saying this could be a real time bomb.

Yes.

Let’s go back to the artists for a second. I think a lot of consumers might look at this and say, “Well, the Beatles and the Stones don’t need more royalties, and Otis Redding is dead. Why does this matter? Who’s really going to suffer if just songs from before 1972 don’t produce royalties for the artists?”

Well, yeah, that’s what Chris Harrison from Pandora said. I think he said something like, “These people never expected to get royalties.” I mean, really? Plenty of those artists are not rich, you know? I just saw Wanda Jackson play — she’s almost 80 and she’s out touring. And she made these iconic rock ‘n’ roll recordings.

Some of the first rockabilly records.

I mean, if Pandora is going to stream these things and if Sirius is going to broadcast these things, why shouldn’t they get paid? We’re America, we’re a fair country. We’re not a country like China, where we just go, “Here’s a politically well-connected elite, we’re just going to hand them the rights to something that somebody created.” Just so the politically well-connected can get richer. It’s really funny to me — look, I’m not really a lefty or liberal, I’m basically a little right of center in my politics — and it’s just funny to see consumers sort of rallying around the rights of corporations and against the rights of individuals.

Well, that is what’s happening.

It is! It would have been like the students in the late ‘60s and early ‘70s protesting for the war. Or for the defense contractors … You know what I mean? “We still need that rice from the Mekong Delta. We need cheap rice from the Mekong Delta, let’s protest against these draft dodgers.” On behalf of … I don’t know—

Dow Chemical or something.

That’s literally what the public is doing now. I’ve said this before, and I don’t think people quite get it.

The Internet has become a cargo cult. People worship the Internet like a cargo cult. It’s this thing that brings them free stuff, and they think it’s magic. It’s beyond rational thought and reason, right? And they have no sense that behind all that free stuff are the drowned ships and sailors. They don’t want to hear that behind the way you get this free stuff, some really fucked-up things have happened to individuals and their individual rights.

And that there are people getting rich off this stuff. Look, people used to go crazy and you’d always hear people talk about how the record labels were so bad to artists back in the ‘50s. They paid them really minimal royalties and stuff like that. But look, these guys are even worse. It’s way, way worse.

Well, let’s extend that a little bit. Since the last time we spoke, it seems like a dozen new streaming services have launched, and streaming is now discussed as the savior of the record industry. We have a new Amazon service, Google has announced one, and the Beats service was bought by Apple. There will surely be others by the end of the month. Do these new services seem, from an artist’s point of view, like an improvement? Or do we just not know?

Well, it’s going to depend on what kind of artist you are. First of all, let’s just take that face-value statement, that streaming will save the music industry. Well, it will if the music business is the kind of music business that’s basically just built around Top 40 songs.

Blockbuster artists.

If you don’t ever want to have Captain Beefheart and Miles Davis and — one of my favorite bands — the gloom-stoner, doom-metal band Sleep. If you don’t ever expect to have those kinds of bands anymore. And the reason is that streaming flattens and commoditizes the spin. You just have one price for every spin of a song across the entire spectrum, whether it’s some kind of avant-garde classical work or a Miley Cyrus song. So that will work if you have lots and lots of spins. But it won’t work if you have just a few spins. So what that will do is push out — and you already see that happening — it will push out any sort of niche or, you know …

Any specialty genres.

Specialty genres. Because people might have gone into the stores and gone, “Well, all the albums are between $9.99 and $17.99, they sort of all hover around $12.99, or whatever. It’s always been that way.” Well, yes and no, because something like a Miley Cyrus record might get spun a whole bunch — you might play that record until you’re sick of it — whereas an Art Blakey record you might play four times a year. Those, in effect, were more expensive, and when you look at the normal, real, non-magical-unicorn part of the economy, niche products cost a lot more than mass-market products.
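To put rough numbers on that per-spin arithmetic, here is a hypothetical sketch. The album price echoes the range mentioned above; the spin counts and the flat per-stream rate are invented for illustration, not any service’s actual figures.

```python
# Hypothetical illustration of how flat per-spin pricing penalizes
# rarely played music. The album price echoes the interview; the spin
# counts and per-stream rate are invented for illustration.

per_stream_rate = 0.005  # assumed flat payout per spin, in dollars

albums = {
    "pop record you play constantly": 200,  # assumed spins per listener per year
    "jazz record you play rarely": 4,
}

cd_price = 12.99
for name, spins in albums.items():
    streaming_revenue = spins * per_stream_rate
    print(f"{name}: CD sale ${cd_price:.2f} once, "
          f"vs. about ${streaming_revenue:.2f} per listener per year on streaming")
# Under album pricing both records earned roughly the same amount up front;
# under a single flat per-spin price, the rarely played record earns pennies.
# That is the "flattening" of niche music described above.
```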

Maybe we could look at food: Fast food costs less, going to the farmer’s market costs more. But people have decided, increasingly, that it’s worth paying a little more for healthier, fresher, local, whatever food. What you’re saying, I think, is that the economic structure of streaming means that everybody’s —

Everything is the same price.

Well, there’s no incentive to make anything besides mass-market —

The most mass-market stuff, exactly. It’s as if — my analogy is — the government mandated that all T-shirts cost $3. We would all be wearing semi-ironic American flag T-shirts from Wal-Mart, because nobody would make anything else. Because it has to appeal to the mass market. And yeah, you may not see it right now, but I don’t know what you’ll see 20 years from now. Maybe other systems will come up to fix it, but I don’t think it bodes very well for anything other than the most mass-market kind of music.

Anyway, since when does the federal government basically step in and say, “You entire class of people who do this one thing — people who write poetry to music — this one class of Americans who write songs: we’re going to make it so that your songs have to appear on these services. You can’t really get out. You have to sell these songs on these services.” It’s a weird thing we’ve done as a country.

You’re unusual in some ways in your sentiments. A lot of the people fighting for artists’ rights are on the political left. Your argument, I think, is that what we have now is a kind of unpleasant combination of the marketplace and government regulation — kind of a worst of both worlds?

Yeah, it’s like some sort of corporate socialism. We basically mandate that individuals give their songs to these companies. I really feel like this is a simple problem to fix. There should just be an opt-out. You should just be able to serve notice with the copyright office six months in advance: as of 2015, I’m the owner of these songs, and I am opting out of all of these services.

And why can’t musicians opt out so easily?

There’s no way for songwriters, really, to opt out. There have been a couple of people who have pulled these really weird tricks where essentially their songs are not really published, so they’re sort of not public, and then they forgo performance fees. But how they did that is really complicated.

As a performer, if you own your own recording, you can opt out of streaming services that are on-demand, but you can’t opt out of webcasting services, which are not quite on-demand. You can opt out of Spotify but not Pandora. And you can only opt out of Spotify on the on-demand side — you know how they have a Pandora-like radio service too? Your songs will still be played there.

As a performer, you have this really narrow place where you can opt out. But as a songwriter that’s not possible anywhere.

Right. And if you have a deal with the label it’s even more complicated …

Yeah, because the label will just put your stuff in there. But I want to tell you this. I know for a fact that the head of one of the major labels is freaking out about streaming and realizing that what his or her underlings said was going to happen with streaming is not in fact true. And they are very pissed off about that. I can’t disclose my source, but it’s one of the major labels. They completely have buyer’s remorse right now. In fact, you could describe them as being in emergency-management mode over what they’re going to do about streaming, because streaming is clearly cutting into their sales but it’s not making up the difference in revenues. So even for the record labels — I mean, it’s terrible for artists, but at least one of the major labels has realized they have fucked themselves.

Which, actually, I take some delight in. I can’t help it. They got into this.

Because the deals are opaque, we’ve had to speculate, and I guess we still have to speculate on what the deals between the streaming services and the labels were. That isn’t public so we don’t know what kind of sweetheart deals were made between them. We do know that the artists have been largely left out of the process.

Let’s look at it this way. Say we own an apartment together, and we’re going to split whatever money we make off this apartment when we rent it out to somebody. But I go out to this renter and I say, “I tell you what, instead of you giving me $1,500 a month for this little studio apartment, we’ll charge you $750 rent, but you basically give me $8,000 per year personally off the books, and I’ll give you this cheap rent under the table.”

And then I’m splitting just that $750 with you and keeping the eight grand for myself. That’s what happened when the record labels traded equity for lower royalty rates. I don’t know how long it’ll take, but there will eventually be a class action over that — though it may be too late.
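The arithmetic of that analogy works out as follows, in a simple sketch that uses only the figures from the analogy itself, not real label economics.

```python
# Working through the numbers in the apartment analogy above.
# All figures come from the analogy itself, not from any actual label deal.

months = 12
fair_rent = 1_500        # what the studio is really worth per month
declared_rent = 750      # the on-the-books rent that gets split
side_payment = 8_000     # the off-the-books annual payment one partner pockets

fair_share_each = fair_rent * months / 2
left_out_partner = declared_rent * months / 2
dealmaking_partner = declared_rent * months / 2 + side_payment

print(f"Fair split:                   ${fair_share_each:,.0f} each per year")
print(f"Partner left out of the deal: ${left_out_partner:,.0f}")
print(f"Partner who cut the deal:     ${dealmaking_partner:,.0f}")
# In the analogy, trading a lower declared rent for an off-the-books payment
# is like labels trading lower royalty rates for equity: the shared pot
# shrinks, and the side benefit never reaches the artist.
```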

Is it your sense that the streaming services will survive? There’s some worry that most of them haven’t turned a profit and that they don’t have a working business model.

I think they’ll survive, but they’ll be part of Apple, part of Google, part of Amazon. They’ll be part of other services that make money in other ways. I think the same goes for the webcasters as well. I just don’t see how they can really get the ship righted. They’ll need to charge more for their services.

On the other hand, I’m not necessarily against the streaming services. I think something like Spotify is useful, and it’s kind of a good deal under certain circumstances. If I put my sound recording of “Low” on there, and if it were only behind the paywall — the premium-paying wall — I would get more than a penny and a half per spin. So for that song, I think having it on Spotify makes a lot of sense, if it were behind the paywall. It’s just that I don’t want my entire catalog, the entire album, for free on the service.

And you don’t have a choice right now as to whether you do?

We don’t have a choice. There are technicalities, and there are ways certain artists can remove their recordings, but you have to not have a record deal. And frankly, I was part of the first wave of indie musicians in the 1980s. We had our own label — Pitch-A-Tent Records. We are one of the pioneers of indie rock. And, you know, I’ve had this happen before in my 30-year career of being an independent, being on my own label and on a major label. Because sometimes, frankly, it’s like, “I don’t want to do the promotion on my own record.”

There’s an advantage to being on a label sometimes. It’s just really interesting to me. I don’t really see labels totally going away. Some people say, “Well, the labels will figure it out, they’ll figure out when it makes sense for artists.” Some people on the record side of the business are like, “Well, when we aggregate all these rights together we’ll know the best way to exploit these recordings and these copyrights.” I don’t necessarily see that happening and that’s why I just feel like there should be a right for artists to opt out of these services.

We’ve spoken a little about the government. We’ve spoken a little about these big corporations — Google, Amazon — who either own streaming services or webcasters or whatever. Let’s bring it together for a second. Part of what we’re describing is a kind of monopoly capital. We do have part of the federal government that’s supposed to be on the lookout for monopoly behavior — the Department of Justice.

And they are. They’re very vigilant on that. They’ve had the songwriters under monopoly supervision since 1941! They completely have monopoly backwards.

I’m gonna do something that breaks the law right now. I’m a songwriter who has my own publishing company. I think all songwriters should hold out for 10 percent of revenue from Pandora. I urge all songwriters to hold out for 10 percent of revenue from Pandora. I have just violated the consent decree. I am in contempt of court. Someone arrest me!

Because the DOJ doesn’t let songwriters do that. We’re under antitrust supervision. But look at the companies that we’re [supposedly] colluding against — against Pandora, which is 77 percent of the market for streaming. We might collude against Google and YouTube, right? There’s nobody close to them in online video. Let’s see, Spotify is [huge] as far as streaming goes.

Basically, the federal government has monopoly backwards. So you have the monopolies getting together on Capitol Hill and calling for Congress not only to keep the consent decree but to expand it. It’s pretty crazy. It’d be funny if it weren’t Kafkaesque.

Since Reagan, the Department of Justice has focused on what they see as defending consumers, keeping prices low — and they’ve gone pretty easy on big corporations, music and technology corporations included. Do you think the DOJ, for instance, will start paying attention to the effect Amazon and Google are having on the making of culture?

I think they will once somebody sues them and it goes to the Supreme Court. This is a thing I am very seriously considering. I think the consent decree acts as what’s called a bill of attainder. Because essentially, as soon as I write my first song, I’m guilty. There’s no court proceeding. I’m under Department of Justice supervision. There’s no legislation. My rights are limited by extrajudicial, extra-legislative [rules] … Our Founding Fathers were very, very much against this kind of thing. I think the point is that somebody has to sue the Department of Justice for violation of our constitutional rights, and then they’ll stop.

I think it’ll have to go to court. If a judge really looks at it, they’ll see — essentially the way the consent decree works is that it’s a court case that’s been open since 1941. It hasn’t been closed. And as soon as I wrote a song, I became part of that court case. I demonstrated the limitation of my rights by showing how I’m in contempt of court just for saying I think songwriters should hold out for 10 percent from Pandora.

When did I ever get a hearing, right? I never got a hearing on that. When was the law ever passed? The judicial branch can’t make law, right? But they’re making law — by that consent decree they’ve essentially created a statutory right for broadcasters to have our songs.

And really, people are like, “Songwriters — I understand they’re being screwed, but it’s just a small portion of Americans.” But if they can do this to our songs, they can do this to the photos you post on the Web. There are proposed laws, generally falling under the title of “orphan works,” that would essentially allow that for photographs.

Once people start thinking that, well, if songwriters’ songs can be collectivized for the good of these for-profit corporations without a trial or legislation or anything like that, they can do the same thing with what you write on your Facebook account or the photo you post on Twitter. You know what I’m saying? It’s eventually going to get to everybody.

 

Scott Timberg, a longtime arts reporter in Los Angeles who has contributed to the New York Times, runs the blog Culture Crash. His book, “Culture Crash: The Killing of the Creative Class,” comes out in January. Follow him on Twitter at @TheMisreadCity.

http://www.salon.com/2014/08/31/david_lowery_heres_how_pandora_is_destroying_musicians/?source=newsletter

Walter Isaacson: “Innovation” doesn’t mean anything anymore

The man who brought America inside the minds of Einstein, Franklin and Jobs takes issue with modern-day tech hype


If anybody in America understands genius, it’s Walter Isaacson.

The bestselling biographer has chronicled the lives of everyone from Benjamin Franklin and Albert Einstein to Henry Kissinger and (most recently) Steve Jobs. In the process, he has garnered a reputation as a writer deeply attuned to the idiosyncratic — and sometimes megalomaniacal — personalities and predilections of singularly brilliant men. But genius alone, as he would probably be the first to point out, actually isn’t enough to change the world.

We live in a time when technology companies — from Google and Apple to the burgeoning start-up community that’s taken Silicon Valley by storm — have staked a place at the center of the American culture. And the idea of innovation, how an idea translates from mind stuff into tangible reality, has consequently become shrouded in a mythology about genius and grit — brainiacs with a golden idea holed up in a dingy garage, working in obscurity before taking the world by storm — that is emotionally appealing but short on nuance. The truth of the matter, as Isaacson has pointed out, is that what makes for a genuine, world-changing innovation is much more complicated than a towering IQ. In reality, execution is everything.

Isaacson’s new book, “The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution,” slated for release in October, explores how disruptive change really comes to fruition. Salon spoke with him earlier this summer about the nature of innovation, and how it’s often misunderstood. This interview has been edited for length and clarity.

First let’s start with something big: What do you think is the greatest issue facing our time?

Right now, I think it’s inequality of opportunity. I think ever since the days of Benjamin Franklin, the basic American philosophy — the basic American creed — has been that if you work hard and play by the rules, you can be secure. And we’re losing that these days because of unequal education, unequal opportunities and inequalities in wealth.

And I would say that the most important element of that is creating better educational opportunities for all. I think that it used to be … well. I think that every kid should have a decent opportunity for a great education and we don’t have that at the moment.



I wanted to talk to you about how you define innovation because you have written about so many innovators. And the terms “innovation” and “innovator” are sort of the terms of our time. Everything is described in this way.

I think that the word “innovation” has become a buzzword and it’s been drained of much of its meaning because we overuse it.

For the past 12 years I’ve been working on a book about the people who actually invented the computer and the Internet. I put it aside to work on the Steve Jobs biography, but I went back to it, because I wanted to show how real innovators actually get something done.

So instead of trying to write philosophy or simple rules of innovation, I wanted to do it biographically — to show how people you may never have heard of, who invented the computer and the Internet, actually came up with their ideas and executed them. Because I think when we talk about innovation in the abstract, it loses its meaning.

Innovation comes from collaboration, it comes from teamwork, and it comes from being able to take visionary ideas and actually execute them with good engineering. And there’s no simple buzzword definition of innovation; I think it’s useful, as somebody who loves history, to just focus on real people and how they invented things, and that includes the computer, the Internet, but also the transistor, the microchip, search engines, and the World Wide Web. And there were real people who worked in teams and were able to execute on their ideas and I wanted to tell their stories, based on a dozen years of reporting what they did, instead of some abstract theory of innovation.

I began working on this book 12 years ago, mainly focused on the Internet, but after writing about Steve Jobs and interviewing people who’d been involved with the personal computer, I decided to make it a book about the intersection of digital networks and the personal computer, and how people in that field actually executed on their ideas. I’m not interested in how-to manuals about the philosophy of innovation. I’m interested in real people and how they actually succeeded or failed.

Like Ada Lovelace, whom I’ve read you feature prominently in your new book.

Yeah, Ada was the person who connected art to technology in the 1840s. Her father was the poet Lord Byron, and her close friend was Charles Babbage, who invented the analytical engine. She was able to understand how you could program a mechanical calculator to do more than just numbers — that it could weave patterns, just like the punch cards that helped mechanical looms weave fabric. That’s the first example in my next book of how real people went about connecting the arts and technology to make innovative new things.

I start with her, but it goes all the way through the history of people we’ve never paid enough attention to because the people who invented the computer and the Internet were not lone inventors like an Edison or a Bell, in their labs saying, “Eureka!” They were teams of people who worked collaboratively, and so I think sometimes we underestimate … or sometimes we don’t fully appreciate the importance of collaborative creativity. So my book is not a theoretical book, but it’s just a history of the collaborations and teamwork that led to the computer, the Internet, the transistor, the microchip, Wikipedia, Google and other innovations.

In writing this book over the past 12 years, and in your other books on Albert Einstein and Steve Jobs and Ben Franklin, have you noticed any patterns or any similarities? Anything you start to pick up and think, “Oh well that’s very similar between these two, maybe that makes a good innovator”? You’ve mentioned the teams and the collaboration …

Yeah, there’s not one formula, which is fortunate for those of us who write biographies — otherwise you wouldn’t need a lot of biographies. Albert Einstein was much more of a loner, whereas Ben Franklin’s genius was bringing people together into teams. Steve Jobs’ genius was applying creativity and beauty to technology. But the one thing they had in common is that they were all imaginative. They all questioned the conventional way of doing things. And as Einstein once said, imagination is more important than knowledge. And that’s sort of been a theme of all of my books.

 

 

http://www.salon.com/2014/08/05/walter_isaacson_innovation_doesnt_mean_anything_anymore/?source=newsletter

Your Cellphone Could Be a Major Health Risk

…and the Industry Could Be a Lot More Upfront About It

The science is becoming clearer: Sustained EMF exposure is dangerous.


The following is an excerpt from “Overpowered: What Science Tells Us About the Dangers of Cell Phones and Other Wifi-age Devices” by Martin Blank, PhD. Published by Seven Stories Press, March 2014. ISBN 978-1-60980-509-8. All rights reserved.

This excerpt was originally published by Salon.com.

You may not realize it, but you are participating in an unauthorized experiment—“the largest biological experiment ever,” in the words of Swedish neuro-oncologist Leif Salford. For the first time, many of us are holding high-powered microwave transmitters—in the form of cell phones—directly against our heads on a daily basis.

Cell phones generate electromagnetic fields (EMF), and emit electromagnetic radiation (EMR). They share this feature with all modern electronics that run on alternating current (AC) power (from the power grid and the outlets in your walls) or that utilize wireless communication. Different devices radiate different levels of EMF, with different characteristics.

What health effects do these exposures have?

Therein lies the experiment.

The many potential negative health effects from EMF exposure (including many cancers and Alzheimer’s disease) can take decades to develop. So we won’t know the results of this experiment for many years—possibly decades. But by then, it may be too late for billions of people.

Today, while we wait for the results, a debate rages about the potential dangers of EMF. The science of EMF is not easily taught, and as a result, the debate over the health effects of EMF exposure can get quite complicated. To put it simply, the debate has two sides. On the one hand, there are those who urge the adoption of a precautionary approach to the public risk as we continue to investigate the health effects of EMF exposure. This group includes many scientists, myself included, who see many danger signs that call out strongly for precaution. On the other side are those who feel that we should wait for definitive proof of harm before taking any action. The most vocal of this group include representatives of industries who undoubtedly perceive threats to their profits and would prefer that we continue buying and using more and more connected electronic devices.

This industry effort has been phenomenally successful, with widespread adoption of many EMF-generating technologies throughout the world. But EMF has many other sources as well. Most notably, the entire power grid is an EMF-generation network that reaches almost every individual in America and 75% of the global population. Today, early in the 21st century, we find ourselves fully immersed in a soup of electromagnetic radiation on a nearly continuous basis.

What we know

The science to date about the bioeffects (biological and health outcomes) resulting from exposure to EM radiation is still in its early stages. We cannot yet predict that a specific type of EMF exposure (such as 20 minutes of cell phone use each day for 10 years) will lead to a specific health outcome (such as cancer). Nor are scientists able to define what constitutes a “safe” level of EMF exposure.

However, while science has not yet answered all of our questions, it has determined one fact very clearly—all electromagnetic radiation impacts living beings. As I will discuss, science demonstrates a wide range of bioeffects linked to EMF exposure. For instance, numerous studies have found that EMF damages and causes mutations in DNA—the genetic material that defines us as individuals and collectively as a species. Mutations in DNA are believed to be the initiating steps in the development of cancers, and it is the association of cancers with exposure to EMF that has led to calls for revising safety standards. This type of DNA damage is seen at levels of EMF exposure equivalent to those resulting from typical cell phone use.

The damage to DNA caused by EMF exposure is believed to be one of the mechanisms by which EMF exposure leads to negative health effects. Multiple separate studies indicate significantly increased risk (up to two and three times normal risk) of developing certain types of brain tumors following EMF exposure from cell phones over a period of many years. One review that averaged the data across 16 studies found that the risk of developing a tumor on the same side of the head as the cell phone is used is elevated 240% for those who regularly use cell phones for 10 years or more. An Israeli study found that people who use cell phones at least 22 hours a month are 50% more likely to develop cancers of the salivary gland (and there has been a four-fold increase in the incidence of these types of tumors in Israel between 1970 and 2006). And individuals who lived within 400 meters of a cell phone transmission tower for 10 years or more were found to have a rate of cancer three times higher than those living at a greater distance. Indeed, the World Health Organization (WHO) designated EMF—including power frequencies and radio frequencies—as a possible cause of cancer.

While cancer is one of the primary classes of negative health effects studied by researchers, EMF exposure has been shown to increase risk for many other types of negative health outcomes. In fact, levels of EMF thousands of times lower than current safety standards have been shown to significantly increase risk for neurodegenerative diseases (such as Alzheimer’s and Lou Gehrig’s disease) and male infertility associated with damaged sperm cells. In one study, those who lived within 50 meters of a high voltage power line were significantly more likely to develop Alzheimer’s disease when compared to those living 600 meters or more away. The increased risk was 24% after one year, 50% after 5 years, and 100% after 10 years. Other research demonstrates that using a cell phone between two and four hours a day leads to 40% lower sperm counts than found in men who do not use cell phones, and the surviving sperm cells demonstrate lower levels of motility and viability.

EMF exposure (as with many environmental pollutants) not only affects people, but all of nature. In fact, negative effects have been demonstrated across a wide variety of plant and animal life. EMF, even at very low levels, can interrupt the ability of birds and bees to navigate. Numerous studies link this effect with the phenomena of avian tower fatalities (in which birds die from collisions with power line and communications towers). These same navigational effects have been linked to colony collapse disorder (CCD), which is devastating the global population of honey bees (in one study, placement of a single active cell phone in front of a hive led to the rapid and complete demise of the entire colony). And a mystery illness affecting trees around Europe has been linked to WiFi radiation in the environment.

There is a lot of science—high-quality, peer-reviewed science—demonstrating these and other very troubling outcomes from exposure to electromagnetic radiation. These effects are seen at levels of EMF that, according to regulatory agencies like the Federal Communications Commission (FCC), which regulates cell phone EMF emissions in the United States, are completely safe.

An unlikely activist

I have worked at Columbia University since the 1960s, but I was not always focused on electromagnetic fields. My PhDs in physical chemistry from Columbia University and colloid science from the University of Cambridge provided me with a strong, interdisciplinary academic background in biology, chemistry, and physics. Much of my early career was spent investigating the properties of surfaces and very thin films, such as those found in a soap bubble, which then led me to explore the biological membranes that encase living cells.

I studied the biochemistry of infant respiratory distress syndrome (IRDS), which causes the lungs of newborns to collapse (also called hyaline membrane disease). Through this research, I found that the substance on the surface of healthy lungs could form a network that prevented collapse in healthy babies (the absence of which causes the problem for IRDS sufferers).

A food company subsequently hired me to study how the same surface support mechanism could be used to prevent the collapse of the air bubbles added to their ice cream. As ice cream is sold by volume and not by weight, this enabled the company to reduce the actual amount of ice cream sold in each package. (My children gave me a lot of grief about that job, but they enjoyed the ice cream samples I brought home.)

I also performed research exploring how electrical forces interact with the proteins and other components found in nerve and muscle membranes. In 1987, I was studying the effects of electric fields on membranes when I read a paper by Dr. Reba Goodman demonstrating some unusual effects of EMF on living cells. She had found that even relatively weak power fields from common sources (such as those found near power lines and electrical appliances) could alter the ability of living cells to make proteins. I had long understood the importance of electrical forces on the function of cells, but this paper indicated that magnetic forces (which are a key aspect of electromagnetic fields) also had significant impact on living cells.

Like most of my colleagues, I did not think this was possible. By way of background, there are some types of EMF that everyone had long acknowledged are harmful to humans. For example, X-rays and ultraviolet radiation are both recognized carcinogens. But these are ionizing forms of radiation. Dr. Goodman, however, had shown that even non-ionizing radiation, which has much less energy than X-rays, was affecting a very basic property of cells—the ability to stimulate protein synthesis.

Because non-ionizing forms of EMF have so much less energy than ionizing radiation, it had long been believed that non-ionizing electromagnetic fields were harmless to humans and other biological systems. And while it was acknowledged that a high enough exposure to non-ionizing EMF could cause a rise in body temperature—and that this temperature increase could cause cell damage and lead to health problems—it was thought that low levels of non-ionizing EMF that did not cause this rise in temperature were benign.

In over 20 years of experience at some of the world’s top academic institutions, this is what I’d been taught and this is what I’d been teaching. In fact, my department at Columbia University (like every other comparable department at other universities around the world) taught an entire course in human physiology without even mentioning magnetic fields, except when they were used diagnostically to detect the effects of the electric currents in the heart or brain. Sure magnets and magnetic fields can affect pieces of metal and other magnets, but magnetic fields were assumed to be inert, or essentially powerless, when it came to human physiology.

As you can imagine, I found the research in Dr. Goodman’s paper intriguing. When it turned out that she was a colleague of mine at Columbia, with an office just around the block, I decided to follow up with her, face-to-face. It didn’t take me long to realize that her data and arguments were very convincing. So convincing, in fact, that I not only changed my opinion on the potential health effects of magnetism, but I also began a long collaboration with her that has been highly productive and personally rewarding.

During our years of research collaboration, Dr. Goodman and I published many of our results in respected scientific journals. Our research was focused on the cellular level—how EMF permeate the surfaces of cells and affect cells and DNA—and we demonstrated several observable, repeatable health effects from EMF on living cells. As with all findings published in such journals, our data and conclusions were peer reviewed. In other words, our findings were reviewed prior to publication to ensure that our techniques and conclusions, which were based on our measurements, were appropriate. Our results were subsequently confirmed by other scientists, working in other laboratories around the world, independent from our own.

A change in tone

Over the roughly 25 years Dr. Goodman and I have been studying the EMF issue, our work has been referenced by numerous scientists, activists, and experts in support of public health initiatives including the BioInitiative Report, which was cited by the European Parliament when it called for stronger EMF regulations. Of course, our work was criticized in some circles, as well. This was to be expected, and we welcomed it—discussion and criticism is how science advances. But in the late 1990s, the criticism assumed a different character, both angrier and more derisive than past critiques.

On one occasion, I presented our findings at a US Department of Energy annual review of research on EMF. As soon as I finished my talk, a well-known Ivy League professor said (without any substantiation) that the data I presented were “impossible.” He was followed by another respected academic, who stated (again without any substantiation) that I had most likely made some “dreadful error.” Not only were these men wrong, but they delivered their comments with an intense and obvious hostility.

I later discovered that both men were paid consultants of the power industry—one of the largest generators of EMF. To me, this explained the source of their strong and unsubstantiated assertions about our research. I was witnessing firsthand the impact of private, profit-driven industrial efforts to confuse and obfuscate the science of EMF bioeffects.

Not the first time

I knew that this was not the first time industry opposed scientific research that threatened their business models. I’d seen it before many times with tobacco, asbestos, pesticides, hydraulic fracturing (or “fracking”), and other industries that paid scientists to generate “science” that would support their claims of product safety.

That, of course, is not the course of sound science. Science involves generating and testing hypotheses. One draws conclusions from the available, observable evidence that results from rigorous and reproducible experimentation. Science is not sculpting evidence to support your existing beliefs. That’s propaganda. As Dr. Henry Lai (who, along with Dr. Narendra Singh, performed the groundbreaking research demonstrating DNA damage from EMF exposure) explains, “a lot of the studies that are done right now are done purely as PR tools for the industry.”

An irreversible trend

Of course EMF exposure—including radiation from smart phones, the power lines that you use to recharge them, and the other wide variety of EMF-generating technologies—is not equivalent to cigarette smoking. Exposure to carcinogens and other harmful forces from tobacco results from the purely voluntary, recreational activity of smoking. If tobacco disappeared from the world tomorrow, a lot of people would be very annoyed, tobacco farmers would have to plant other crops, and a few firms might go out of business, but there would be no additional impact.

In stark contrast, modern technology (the source of the human-made electromagnetic fields discussed here) has fueled a remarkable degree of innovation, productivity, and improvement in the quality of life. If tomorrow the power grid went down, all cell phone networks would cease operation, millions of computers around the world wouldn’t turn on, and the night would be illuminated only by candlelight and the moon—we’d have a lot less EMF exposure, but at the cost of the complete collapse of modern society.

EMF isn’t just a by-product of modern society. EMF, and our ability to harness it for technological purposes, is the cornerstone of modern society. Sanitation, food production and storage, health care—these are just some of the essential social systems that rely on power and wireless communication. We have evolved a society that is fundamentally reliant upon a set of technologies that generate forms and levels of electromagnetic radiation not seen on this planet prior to the 19th century.

As a result of the central role these devices play in modern life, individuals are understandably predisposed to resist information that may challenge the safety of activities that result in EMF exposures. People simply cannot bear the thought of restricting their time with—much less giving up—these beloved gadgets. This gives industry a huge advantage because there is a large segment of the public that would rather not know.

Precaution

My message is not to abandon gadgets—like most people, I too love and utilize EMF-generating gadgets. Instead, I want you to realize that EMF poses a real risk to living creatures and that industrial and product safety standards must and can be reconsidered. The solutions I suggest are not prohibitive. I recommend that as individuals we adopt the notion of “prudent avoidance,” minimizing our personal EMF exposure and maximizing the distance between us and EMF sources when those devices are in use. Just as you use a car with seat belts and air bags to increase the safety of the inherently dangerous activity of driving your car at a relatively high speed, you should consider similar risk-mitigating techniques for your personal EMF exposure.

On a broader social level, adoption of the Precautionary Principle in establishing new, biologically based safety standards for EMF exposure for the general public would be, I believe, the best approach. Just as the United States became the first nation in the world to regulate the production of chlorofluorocarbons (CFCs) when science indicated the threat to earth’s ozone layer—long before there was definitive proof of such a link—our governments should respond to the significant public health threat of EMF exposure. If EMF levels were regulated just as automobile carbon emissions are regulated, this would force manufacturers to design, create, and sell devices that generate much lower levels of EMF.

No one wants to return to the dark ages, but there are smarter and safer ways to approach our relationship—as individuals and across society—with the technology that exposes us to electromagnetic radiation.

Dr. Martin Blank is an expert on the health-related effects of electromagnetic fields and has been studying the subject for more than thirty years. He earned his first PhD from Columbia University in physical chemistry and his second from the University of Cambridge in colloid science. From 1968 to 2011, he taught as an associate professor at Columbia University, where he now acts as a special lecturer. Dr. Blank has served as an invited expert regarding EMF safety for Canadian Parliament, for the House Committee on Natural Resources and Energy (HNRE) in Vermont, and for Brazil’s Supreme Federal Court.

 

http://www.alternet.org/books/your-cellphone-could-be-major-health-risk-and-industry-could-be-lot-more-upfront-about-it?akid=11734.265072.RMLVql&rd=1&src=newsletter983753&t=7&paging=off&current_page=1#bookmark

Apparently you can’t be empathetic, or help the homeless, without a GoPro

Today in bad ideas: Strapping video cameras to homeless people to capture “extreme living”


GoPro cameras are branded as recording devices for extreme sports, but a San Francisco-based entrepreneur had a different idea of what to do with the camera: Strap it to a homeless man and capture “extreme living.”

The project is called Homeless GoPro, and it involves learning the first-person perspective of homeless people on the streets of San Francisco. The website explains:

“With a donated HERO3+ Silver Edition from GoPro and a small team of committed volunteers in San Francisco, Homeless GoPro explores how a camera normally associated with extreme sports and other ’hardcore’ activities can showcase courage, challenge, and humanity of a different sort – extreme living.”

The intentions of the founder, Kevin Adler, seem altruistic. His uncle was homeless for 30 years, and after visiting his gravesite he decided to start the organization and help others who are homeless.

The first volunteer to film his life is a man named Adam, who has been homeless for 30 years, six of those in San Francisco. There are several edited videos of him on the organization’s site.

In one of the videos, titled “Needs,” Adam says, “I notice every day that people are losing their compassion and empathy — not just for homeless people — but for society in general. I feel like technology has changed so much — where people are emailing and don’t talk face to face anymore.”

Without knowing it, Adam has critiqued the entire project, which is attempting to use technology (a GoPro) to garner empathy and compassion. It is a sad reminder that humanity can ignore the homeless population in person on a day-to-day basis, yet needs a video to build empathy. Viewers may feel a twinge of guilt as they sit removed from the situation, watching a screen.

According to San Francisco’s Department of Human Services’ biennial count, there were 6,436 homeless people living in San Francisco (county and city). “Of the 6,436 homeless counted,” a press release stated, “more than half (3,401) were on the streets without shelter, the remaining 3,035 were residing in shelters, transitional housing, resource centers, residential treatment, jail or hospitals.” The homeless population is subject to hunger, illness, violence, extreme weather conditions, fear and other physical and emotional ailments.



Empathy — and the experience of “walking a mile in somebody’s shoes” — are important elements of social change, and these documentary-style videos do give Adam a medium and platform to be a voice for the homeless population. (One hopes that the organization also helped Adam in other ways — shelter, food, a place to stay on his birthday — and isn’t just using him as a human tool in its project.) But something about the project still seems off.

It is in part because of the product placement. GoPro donated a $300 camera for the cause, which sounds great until you remember that GoPro is a billion-dollar company owned by billionaire Nick Woodman. If GoPro wants to do something to help the Bay Area homeless population, there are better ways to go about it than donating a camera.

As ValleyWag’s Sam Biddle put it, “Stop thinking we can innovate our way out of one of civilization’s oldest ailments. Poverty, homelessness, and inequality are bigger than any app …”

 

http://www.salon.com/2014/04/17/today_in_bad_ideas_strapping_video_cameras_to_homeless_people_to_capture_extreme_living/?source=newsletter

Neil Young solving music snobs’ problems for $399

 

Quit complaining about your terrible MP3. Young takes his music genius digital to “restore the soul of music”

 

 


 

On March 12, music listeners who are dissatisfied with their iProduct or smartphone’s sound quality will have the chance to pony up $399 on Kickstarter for Neil Young’s PonoMusic. “It’s about the music, real music. We want to move digital music into the 21st century and PonoMusic does that,” Young said in the company’s release. “We couldn’t be more excited — not for ourselves, but for those that are moved by what music means in their lives.”

PonoMusic is not just a portable digital music player (PonoPlayer); it will also have an online music store (PonoMusic.com), where according to the makers you’ll be able to buy the “finest quality, highest-resolution digital music from both major labels and prominent independent labels, curated and archived for discriminating PonoMusic customers.”



The player is shaped like a triangular prism, rather than the nearly flat, pocket-size design of most players. Its odd configuration allows it to rest on its side in a home or car. The PonoPlayer can store between 100 and 500 high-resolution digital-music albums, depending on the size of the album, on its 128GB of memory. It also has an LCD touchscreen for “intuitive” navigation, and it promises the highest fidelity of sound, as if you’re hearing it live. If you’re an audiophile, the device seems to bridge the gap between quality and convenience — with Neil Young’s stamp of approval.
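The storage claim is easy to sanity-check. In the quick sketch below, the per-album file sizes are assumptions about typical high-resolution downloads, not published Pono specifications.

```python
# Rough sanity check of the "100 to 500 albums on 128GB" claim.
# The per-album sizes are assumptions, not figures published by PonoMusic.

capacity_gb = 128

assumed_album_sizes_gb = {
    "smaller high-resolution album": 0.25,
    "larger high-resolution album": 1.25,
}

for label, size_gb in assumed_album_sizes_gb.items():
    print(f"{label} (~{size_gb} GB): roughly {capacity_gb / size_gb:.0f} albums fit")
# With these assumed sizes the player holds roughly 100 to 500 albums,
# consistent with the range quoted in the announcement.
```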

http://www.salon.com/2014/03/10/neil_young_solving_music_snobs_problems_for_399/?source=newsletter