PAN is a group of open-minded but progressively biased individuals of various ages and backgrounds. We meet monthly in the Chicago area to discuss topical issues. We also enjoy film, theater, lectures, and alcohol together on a semi-regular basis.
Want to know more? :: Contact : tysoe1@hotmail.com
NEW: November 2007 Topic Survey Results
November 2006 Topic Survey Results
******************************************************
PAN Discussion Group Wednesday July 30th 2008
Subject: The Internet. How has it changed our lives?
Where is it going and what are other potential uses for it?
******************************************************
Location: Logan Square
Time: 7pm to 10pm - ish
Bring drinks and snacks to share
General:
The articles are the basis for the discussion, and reading them helps give us some common ground and focus, especially where we would otherwise be ignorant of the issues. The discussions are not intended as debates or arguments; rather, they should be a chance to explore ideas and issues in a constructive forum. Feel free to bring along other material you've read on this or related subjects, or on topics (especially topical ones) that the group might be interested in for future meetings.
GROUND RULES:
* Temper the urge to speak with the discipline to listen and leave space for others
* Balance the desire to teach with a passion to learn
* Hear what is said and listen for what is meant
* Marry your certainties with others' possibilities
* Reserve judgment until you can claim the understanding we seek
Any problems let me know...
847-985-7313
tysoe2@yahoo.com
The Articles:
****************************************************************
First a recent Atlantic article: Is Google Making us Stupid?
http://www.theatlantic.com/doc/200807/google
"Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial brain. “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”
I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.
I think I know what’s going on. For more than a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I’ve got the telltale fact or pithy quote I was after. Even when I’m not working, I’m as likely as not to be foraging in the Web’s info-thickets—reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they’re sometimes likened, hyperlinks don’t merely point to related works; they propel you toward them.)
For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.
I’m not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they’re having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. “I was a lit major in college, and used to be [a] voracious book reader,” he wrote. “What happened?” He speculates on the answer: “What if I do all my reading on the web not so much because the way I read has changed, i.e. I’m just seeking convenience, but because the way I THINK has changed?”
Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how the Internet has altered his mental habits. “I now have almost totally lost the ability to read and absorb a longish article on the web or in print,” he wrote earlier this year. A pathologist who has long been on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a “staccato” quality, reflecting the way he quickly scans short passages of text from many sources online. “I can’t read War and Peace anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”
Anecdotes alone don’t prove much. And we still await the long-term neurological and psychological experiments that will provide a definitive picture of how Internet use affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a U.K. educational consortium, that provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited “a form of skimming activity,” hopping from one source to another and rarely returning to any source they’d already visited. They typically read no more than one or two pages of an article or book before they would “bounce” out to another site. Sometimes they’d save a long article, but there’s no evidence that they ever went back and actually read it. The authors of the study report:
It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.
Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it’s a different kind of reading, and behind it lies a different kind of thinking—perhaps even a new sense of the self. “We are not only what we read,” says Maryanne Wolf, a developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain. “We are how we read.” Wolf worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.
Reading, explains Wolf, is not an instinctive skill for human beings. It’s not etched into our genes the way speech is. We have to teach our minds how to translate the symbolic characters we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains. Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.
Sometime in 1882, Friedrich Nietzsche bought a typewriter—a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.
But the machine had a subtler effect on his work. One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.”
“You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts.” Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.”
The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that’s not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to reprogram itself on the fly, altering the way it functions.”
As we use what the sociologist Daniel Bell has called our “intellectual technologies”—the tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences.” The “abstract framework of divided time” became “the point of reference for both action and thought.”
The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.
The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.
The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.
When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration.
The Net’s influence doesn’t end at the edges of a computer screen, either. As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets. When, in March of this year, The New York Times decided to devote the second and third pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day’s news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.
Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. The Net’s intellectual ethic remains obscure.
About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant’s machinists. With the approval of Midvale’s owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.
More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”
Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”
Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.
The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”
Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?
Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.
Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).
The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.
So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.
If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what’s at stake:
I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.”
As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”
I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
**********************************************************
Next, a review of a survey done by the Pew Internet & American Life Project:
http://news.bbc.co.uk/2/hi/technology/5370688.stm
The internet will be a thriving, low-cost network of billions of devices by 2020, says a major survey of leading technology thinkers.
The Pew report on the future internet surveyed 742 experts in the fields of computing, politics and business. More than half of respondents had a positive vision of the net's future but 46% had serious reservations. Almost 60% said that a counter culture of Luddites would emerge, some resorting to violence.
The Pew Internet and American Life report canvassed opinions from the experts on seven broad scenarios about the future internet, based on developments in the technology in recent years.
Written responses
The correspondents were also able to qualify their answers with written responses giving more detail.
"Key builders of the next generation of internet often agree on the direction technology will change, but there is much less agreement about the social and political impact those changes will have," said Janna Quitney Anderson, lead author of the report The Future of the Internet II.
She added: "One of their big concerns is: Who controls the internet architecture they have created?"
Bob Metcalfe, founder of 3Com and the inventor of Ethernet, predicted the net would be a global connection of different devices. "The internet will have gone beyond personal communications by 2020," he wrote.
'Embedded micros'
"Many more of today's 10 billion new embedded micros per year will be on the internet."
Louis Nauges, president of Microcost, a French information technology firm, saw mobile devices at the forefront of the net. "Mobile internet will be dominant," he explained. "By 2020, most mobile networks will provide one-gigabit-per-second-minimum speed, anywhere, anytime. Dominant access tools will be mobile, with powerful infrastructure characteristics. All applications will come from the net."
But not everyone felt a "networked nirvana" would be possible by 2020. Concerns over interoperability (different formats working together), government regulation and commercial interests were seen as key barriers to a universal internet.
Ian Peter, Australian leader of the Internet Mark II Project, wrote: "The problem of the digital divide is too complex and the power of legacy telco regulatory regimes too powerful to achieve this utopian dream globally within 15 years."
'Real interoperability'
Author and social commentator Douglas Rushkoff agreed with Mr Peter.
He wrote: "Real interoperability will be contingent on replacing our bias for competition with one for collaboration. Until then, economics do not permit universal networking capability."
Many of the surveyed experts predicted isolated and small-scale violent attacks intended to thwart technology's march. "Today's eco-terrorists are the harbingers of this likely trend," wrote Ed Lyell, an expert on the internet and education.
"Every age has a small percentage that cling to an overrated past of low-technology, low-energy lifestyle."
"Of course there will be more Unabombers," wrote Cory Doctorow of the blog BoingBoing. Some commentators felt that any violence would be tied to the effects of technology rather than the technology itself, or would take the form of civil action around issues such as privacy.
"The interesting question is whether these acts will be considered terrorism or civil disobedience," wrote Marc Rotenberg of the Electronic Privacy Information Center.
More than half of respondents disagreed that English would become the lingua franca of the internet by 2020, and a majority likewise disagreed that letting machines take over some net tasks, such as surveillance and security, would prove dangerous.
Internet Society Board chairman Fred Baker wrote: "We will certainly have some interesting technologies." He added: "Until someone finds a way for a computer to prevent anyone from pulling its power plug, however, it will never be completely out of control."
The respondents were split over whether the impact of people's lives increasingly moving online, resulting in both less privacy and more transparency, would be a positive outcome.
'Access information'
Tiffany Shlain, founder of the Webby awards, said such transparency would be a benefit to society. "Giving all people access to our information and a context to understand it will lead to an advancement in our civilisation."
But NetLab founder Barry Wellman disagreed: "The less one is powerful, the more transparent his or her life. The powerful will remain much less transparent."
Mr Doctorow wrote: "Transparency and privacy aren't antithetical. We're perfectly capable of formulating widely honored social contracts that prohibit pointing telescopes through your neighbours' windows. We can likewise have social contracts about sniffing your neighbours' network traffic."
By 2020 an increasing number of people will be living and working within "virtual worlds", being more productive online than offline, the majority of the respondents said. Ben Detenber, an associate professor at Nanyang Technological University, responded: "Virtual reality (VR) will only increase productivity for some people. For most, it will make no difference in productivity (i.e., how much output); VR will only change what type of work people do and how it is done."
Glenn Ricart, a board member at the Internet Society, warned also of potential dangers. He envisaged "an entire generation opting-out of the real world and a paradoxical decrease in productivity as the people who provide the motive economic power no longer are in touch with the realities of the real world".
HOW RESPONDENTS ASSESSED SCENARIOS FOR 2020

Scenario                                                           Agree  Disagree  No response
A global, low-cost network thrives                                   56%       43%           1%
English displaces other languages                                    42%       57%           1%
Autonomous technology is a problem                                   42%       54%           4%
Transparency builds better world, even at the expense of privacy     46%       49%           5%
Virtual reality is a drain for some                                  56%       39%           5%
The internet opens worldwide access to success                       52%       44%           5%
Some Luddites/Refuseniks will commit terror acts                     58%       35%           7%

Source: Pew Center
So where will MySpace/Facebook etc take us …
http://www.time.com/time/magazine/article/0,9171,1655722,00.html
Thursday, Aug. 23, 2007
Why Facebook Is the Future By Lev Grossman
On Aug. 14 a computer hacker named Virgil Griffith unleashed a clever little program onto the Internet that he dubbed WikiScanner. It's a simple application that trolls through the records of Wikipedia, the publicly editable Web-based encyclopedia, and checks on who is making changes to which entries. Sometimes it's people who shouldn't be. For example, WikiScanner turned up evidence that somebody from Wal-Mart had punched up Wal-Mart's Wikipedia entry. Bad retail giant.
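WikiScanner's core trick can be sketched in a few lines: Wikipedia records anonymous edits under the editor's IP address, and many organisations' address ranges are publicly known, so the two datasets can simply be joined. Here is a minimal illustration of that matching step using only Python's standard library; the organisation names, ranges and edits are invented for the example, not real data.

```python
import ipaddress

# Hypothetical organisation -> published CIDR ranges (illustrative only)
ORG_RANGES = {
    "ExampleCorp": [ipaddress.ip_network("198.51.100.0/24")],
    "ExampleUni": [ipaddress.ip_network("203.0.113.0/24")],
}

def attribute_edit(ip_str):
    """Return the organisation whose address range contains this IP, if any."""
    ip = ipaddress.ip_address(ip_str)
    for org, nets in ORG_RANGES.items():
        if any(ip in net for net in nets):
            return org
    return None

# A couple of fake anonymous edits: (article title, editor's IP address)
edits = [("Widget Co", "198.51.100.7"), ("Weather", "192.0.2.5")]
for article, ip in edits:
    print(article, "->", attribute_edit(ip))
```

The real WikiScanner did this at scale against Wikipedia's public edit logs and IP-to-organisation databases; the lookup logic, though, is no more complicated than this.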
WikiScanner is a jolly little game of Internet gotcha, but it's really about something more: a growing popular irritation with the Internet in general. The Net has anarchy in its DNA; it's always been about anonymity, playing with your own identity and messing with other people's heads. The idea, such as it was, seems to have been that the Internet would free us of the burden of our public identities so we could be our true, authentic selves online. Except it turns out--who could've seen this coming?--that our true, authentic selves aren't that fantastic. The great experiment proved that some of us are wonderful and interesting but that a lot of us are hackers and pranksters and hucksters. Which is one way of explaining the extraordinary appeal of Facebook.
Facebook is, in Silicon Valley-ese, a "social network": a website for keeping track of your friends and sending them messages and sharing photos and doing all those other things that a good little Web 2.0 company is supposed to help you do. It was started by Harvard students in 2004 as a tool for meeting--or at least discreetly ogling--other Harvard students, and it still has a reputation as a hangout for teenagers and the teenaged-at-heart. Which is ironic because Facebook is really about making the Web grow up.
Whereas Google is a brilliant technological hack, Facebook is primarily a feat of social engineering. (It wouldn't be a bad idea for Google to acquire Facebook, the way it snaffled YouTube, but it's almost certainly too late in the day for that. Yahoo! offered a billion for Facebook last year and was rebuffed.) Facebook's appeal is both obvious and rather subtle. It's a website, but in a sense, it's another version of the Internet itself: a Net within the Net, one that's everything the larger Net is not. Facebook is cleanly designed and has a classy, upmarket feel to it--a whiff of the Ivy League still clings. People tend to use their real names on Facebook. They also declare their sex, age, whereabouts, romantic status and institutional affiliations. Identity is not a performance or a toy on Facebook; it is a fixed and orderly fact. Nobody does anything secretly: a news feed constantly updates your friends on your activities. On Facebook, everybody knows you're a dog.
Maybe that's why Facebook's fastest-growing demographic consists of people 35 or older: they're refugees from the uncouth wider Web. Every community must negotiate the imperatives of individual freedom and collective social order, and Facebook constitutes a critical rebalancing of the Internet's founding vision of unfettered electronic liberty. Of course, it is possible to misbehave on Facebook--it's just self-defeating. Unlike the Internet, Facebook is structured around an opt-in philosophy; people have to consent to have contact with or even see others on the network. If you're annoying folks, you'll essentially cease to exist, as those you annoy drop you off the grid.
Facebook has taken steps this year to expand its functionality by allowing outside developers to create applications that integrate with its pages, which brings with it expanded opportunities for abuse. (No doubt Griffith is hard at work on FacebookScanner.) But it has also hung on doggedly to its core insight: that the most important function of a social network is connecting people and that its second most important function is keeping them apart.
*************************************************************************************
Who rules the internet?
http://news.bbc.co.uk/go/pr/fr/-/2/hi/technology/4871638.stm
Unease over how the net is run
Internet governance issues usually attract the attention of a relatively small number of net users. However, concerns associated with the current system have begun to grow, writes internet law professor Michael Geist.
The Internet Corporation for Assigned Names and Numbers (Icann), the US-based body charged with managing the net's domain name system, just wrapped up a week-long meeting in Wellington, New Zealand on Friday, and it now finds itself the target of criticism from some of its closest allies.
Icann, which then-US President Bill Clinton established in the late 1990s, initially viewed itself as a technical body mandated with ensuring that the net functioned in a stable and secure manner.
While stability and security remain an important objective, today no one seriously questions the fact that internet governance extends far beyond technical concerns.
The introduction of new top-level domains is a major issue for domain name registrars, who rightly note that Icann exerts strong regulatory control over the size and scope of the domain name marketplace.
Icann has moved frustratingly slowly in establishing new domain name extensions, with only a handful, such as .biz or .info, appearing on the market in recent years.
Online politics
Governments have also taken an increasing interest in Icann, focusing primarily on their own national country-code top-level domains such as .uk for the United Kingdom.
The power of Icann, and by extension the US government, to influence these domains has raised serious questions about the intersection between the internet and national sovereignty as governments maintain that they should be final arbiters over their country-code domains.
Many governments have also wondered why Icann has been so slow to establish multi-lingual domains that would allow their citizens to register domain names in their native language. While the issue has been a priority for many developing countries, Icann has not moved at net speeds on the issue.
Other Icann policies have attracted the interest of a diverse group of communities. The privacy community has worked with Icann for years without success to establish an appropriate "whois" policy, which addresses the conditions under which the personal information of someone registering a domain name is publicly disclosed.
The free speech community has actively called on Icann to examine its policy for resolving domain name disputes, expressing disappointment that the current policy has been used to shut down legitimate criticism websites.
Despite the mounting frustration with Icann, until recently it could count on support from the US government and the administrators for several leading country-code domains.
At last year's World Summit on the Information Society in Tunisia, Icann overcame opposition from Europe and the developing world to retain responsibility over the domain name system. Over the past month, however, even Icann's most ardent supporters have begun to express doubts about the organization's lack of transparency and accountability.
Pressure on Icann
Last week, US Congressman Rick Boucher called for a Congressional investigation into Icann and its recent decision to settle litigation with Verisign, which manages the lucrative .com registry.
The settlement, which awards Verisign near permanent control over the .com domain, has faced sharp criticism from across the internet governance community.
In Canada, the Canadian Internet Registration Authority, Cira, recently published an open letter to Icann calling on it to implement greater accountability, transparency, and fair processes.
Backing up its words with actions, Cira said that until Icann addressed these concerns, it would suspend payment of thousands of dollars in contributions and cease consideration of a new contractual agreement with the organisation. Moreover, Cira added that it would no longer host or sponsor any Icann-related events.
The net supervisory body has also come under fire from the Public Interest Registry, PIR, which manages the .org domain.
Last week it called on Icann to address concerns over the thriving business of grabbing domain names that have not been re-registered.
PIR noted that many registrants are unaware that their domain names are valuable and that allowing them to lapse may lead to their misuse.
It pointed specifically to one instance where a domain name associated with a rape crisis centre was not re-registered and soon after pointed to a pornographic website.
Internet governance policies strike at the core of free speech, privacy, and a competitive marketplace.
Icann's seeming inability to address these issues in an accountable, transparent, and timely manner has alienated some of its strongest supporters, opening the door to the prospect for major changes to the global internet governance landscape.
Has the internet improved the political landscape ?
Record Percentage Of Americans Use Internet For Politics, Survey Finds
By Sarah Lai Stirland June 15, 2008
A record percentage of Americans have used the internet to participate in the most closely watched presidential election in decades, finds a newly released survey from the non-partisan Pew Internet & American Life Project.
The spring 2008 survey finds that a record-breaking 46 percent of all Americans have used the internet, e-mail or cell-phone text messaging to participate in the political process.
The survey found that the internet is becoming an increasing part of the norm of political participation -- people are using it to read the news, share their views, or to participate in some other process to get others to take political action.
"In this season, just the twelfth year of presidential politics online, there is no disputing the fact that the internet has moved from the periphery to the center of national politics," write Aaron Smith, a research specialist, and Lee Rainie, the Pew project's director, in the new survey.
Bloggers in general are having a huge impact on the course of the election, and the audio and video they are digging up is playing a significant role in driving the news cycle, the researchers write.
The Huffington Post's recording of Barack Obama telling a San Francisco fund-raiser that a portion of the electorate is bitter over job losses and clinging to their guns and religion is an example, they write. "The event became a central narrative of the campaign heading into the Pennsylvania primary," they write.
Other pivotal internet moments that the researchers point to: the online conversation and video-viewing of Obama's former pastor the Reverend Jeremiah Wright and his incendiary sermons; and the controversy stirred up by blogger Bruce Wilson over the sermons of John Hagee, a preacher and televangelist. John McCain severed his ties to Hagee after Wilson posted audio of the preacher arguing that Hitler was an agent of God.
"Some 47 percent of online adults have watched at least one type of online political video (out of a list of five possible types of videos)," write the Pew researchers. That amounts to 35 percent of all adults.
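Those two figures also let you back out the survey's implied rate of internet penetration among adults. The 74% result below is our own back-of-the-envelope inference from the quoted percentages, not a number Pew reports directly.

```python
# 47% of online adults equals 35% of all adults, so the implied share
# of adults who are online follows by division.
share_of_online_adults = 0.47
share_of_all_adults = 0.35
implied_online_adults = share_of_all_adults / share_of_online_adults
print(f"implied share of adults online: {implied_online_adults:.0%}")  # ~74%
```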
Overall, the surveyors found that just under a third of all internet users have participated in the online political process through a variety of means: They either forwarded or wrote their own political commentary, signed online petitions, signed up with the campaigns themselves to receive information, forwarded online audio or video segments, or signed up to volunteer for events related to campaigns.
Fewer than one percent of those surveyed had created their own political video or audio recordings, the survey found.
In many ways, the survey's numbers simply confirm anecdotal evidence of the nature of the online campaign so far.
For example, the authors of the survey write: "Simply put, Democrats and Obama backers are more in evidence on the internet than backers of other candidates or parties."
Then they add: "Among Democrats, Obama's supporters are more likely than Hillary Clinton's supporters to be internet users -- 82% vs. 71%."
Unsurprisingly, the survey found that almost two thirds of Obama supporters get their political news and information on the internet, versus 56% of McCain supporters.
[Chart: Obama's supporters are more politically active on online social networks. Data from the Pew Internet & American Life Project]
The survey also finds that Obama supporters are "more politically active social networking users than McCain supporters when the two candidates are compared head to head."
At the same time as Americans' use of the internet has grown in the political sphere, there's also a healthy dose of skepticism.
Sixty percent of those surveyed, for example, agreed with the statement that "The internet is full of misinformation and propaganda that too many voters believe is accurate." Thirty-two percent disagreed with that.
A surprising 74 percent of those surveyed disagreed with the statement that they would not be as involved with the campaign if it weren't for the internet.
But age could be the factor that explains that finding. When the Pew researchers broke that out among age groups, a larger portion of the younger groups of people tended to agree that the internet is important in helping them to stay active and connected with the campaigns.
The survey also finds that Americans are eager to view source materials for themselves -- almost 40 percent of internet users and a third of all adults have gone online to read or watch unfiltered campaign material, such as archived debates, speeches and announcements and position papers.
What technology goodies lie ahead …
From Slate. Article URL: http://www.slate.com/id/2120440/
There's nothing obviously different or magical about Alan Crosswell's computer. The dirty, beige machine sits idle in a nondescript office at Columbia University, where Crosswell directs the school's computer network. Then he lets it loose. In just 2 minutes and 41 seconds, it pulls down more than 500 megabytes of Linux code from servers at Duke University, a task that would normally take hours. Next, Crosswell shows me a violin master class held via videoconference. The DVD-like resolution creates an immediacy that you don't get with choppy streaming video, and the better-than-CD audio allows both the teacher in Canada and the student in New York to hear every nuance.
How are these incredible feats of data transmission possible? Because Columbia has access to the other, better Internet—Internet2.
Yes, there is another Internet. The term "Internet" simply refers to a network of computers. The one that most of us use is Internet1, or the "commodity Internet." Internet2 was created nearly a decade ago by academics at research universities as a noncommercial prototype—something like what the Internet was back when just a few university researchers were logged on to ARPANET.
Like the commodity Internet, Internet2 comprises servers, routers, switches, and computers that are all connected together. Routers decide which way to send information, and servers handle Web site requests and store information for retrieval. What makes Internet2 so different is that it has many fewer users and much faster connections.
While Internet1 is open to pretty much anyone with a computer, access to Internet2 is limited to a select few, and its backbone is made up entirely of large-capacity fiber-optic cables. Unlike Internet1, which is cobbled together out of old telephone lines, Internet2 was built for speed--the roads are all wide and smooth, like your own private autobahn. Internet2 moves data at 10 gigabits per second and more, compared with the 4 or so megabits you'll get using a cable modem. As a result, Internet2 moves data 100 to 1,000 times faster than the old-fashioned Internet.
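To put those link rates in perspective, here is a back-of-the-envelope calculation of how long a 4.7 GB DVD image would take at each quoted speed. These are idealised line rates only, ignoring protocol overhead and the per-user caps the article mentions later.

```python
# Idealised transfer time: sizes in decimal gigabytes, rates in megabits
# per second. Link rates are quoted in bits, not bytes, hence the factor 8.
def transfer_seconds(size_gb, rate_mbps):
    bits = size_gb * 8e9
    return bits / (rate_mbps * 1e6)

dvd_gb = 4.7
print(f"4 Mbps cable modem: {transfer_seconds(dvd_gb, 4) / 3600:.1f} hours")   # ~2.6 hours
print(f"10 Gbps Internet2:  {transfer_seconds(dvd_gb, 10_000):.1f} seconds")  # ~3.8 seconds
```

The few-seconds figure for a raw 10 Gbps pipe is consistent with the "30-second DVD-quality Matrix" claims quoted below, which assume a slower effective rate at the end hosts.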
More than 200 universities, 70 private companies, 45 government agencies, and 45 international organizations log on to Internet2 every day. Your work computer might be linked to Internet2 already—you can use this Java applet to find out. There are no secret Web addresses or special browsers required to log on, no buttons saying, "Click here for Internet2." Organizations that want to join up must demonstrate a research-related purpose, pay dues, and meet minimum technical requirements so they don't slow down the rest of the Internet2 empire.
When you set up a super-fast Internet connection on a college campus, not everyone is going to use it for research. In the last two months, the RIAA has announced two separate groups of lawsuits against students who allegedly shared music using an Internet2-specific file-sharing site called i2hub.com. Wire reports on the lawsuits claim that an Internet2 connection allows you to download "a DVD-quality copy of the popular movie The Matrix in 30 seconds." I didn't get a chance to try any field tests. When I tried to persuade Columbia's Crosswell to let me download a couple of movies for my personal collection, he politely declined.
So, will Internet2 be the downfall of the music and film industries? Probably not. Those 30-second download speeds you're reading about are theoretical. Some universities put caps on how much data individual users can transfer, or how fast they can send and receive data on certain computers. Plus, the hardware in most home computers—the network cards, for example—isn't fast enough to keep up with Internet2 speeds.
The RIAA isn't completely safe, though. Not too far in the future, cable companies will probably sell Internet2-like download speeds to home users. However, most people won't ever use Internet2 itself.
Internet2 was never designed to replace the Internet most of us are using now. It's more like a beach or a restaurant—great when not too many people know about it, frustrating when everybody and his mother starts to show up. Internet2's promoters like to compare it to early research networks that fostered the creation of canonical apps like the World Wide Web and e-mail. So, even if you never use Internet2 to download movies at hyperspeed, you still might benefit from the research. Let's just hope they let us use e-mail2.
But then we still have that old digital divide……
http://sfgate.com/cgi-bin/article.cgi?f=/c/a/2008/04/11/EDJU103F1U.DTL
When more than 3 million voters under age 30 turned out for recent caucuses and primaries, they staked a claim as a major force shaping this historic presidential election. Because so many leave college with, on average, $20,000 in debt during a recession economy and are entering a job market with fewer opportunities to earn a decent living, energized young Americans are yearning to help solve America's problems, address the mounting issues of income disparity, and contribute to the health and well-being of their communities. At the same time, a call for enhanced national public service is part of the presidential candidates' campaign platforms.
Thus, this is a singular moment in which to demand a larger and bolder vision to propel all Americans, across generations, fully into the 21st century. It's time for a Digital New Deal.
Even though we inhabit a technologically saturated environment, America is not keeping pace in its capacity as a technological world leader. In the array of studies comparing internet infrastructures across nations, America's highest ranking in any of them is 4th - in network readiness to compete globally - while it places 24th among industrialized nations in broadband penetration to U.S. households. These rankings show that America has a ways to go to remain competitive in the dynamic global economy, not to mention protecting itself from cyber-terrorism and other internet high jinks.
Our next president can help reconstruct America's fragmented and relatively weak public communications infrastructure by using the most effective tool our youth wield - the power and depth of their digital fluency.
This eager, highly knowledgeable, connected and multitasking first generation of digital natives - "millennials" coming of age now who have used computers and the Internet since childhood - can be put to work in a WPA-inspired Digital New Deal to build out a networked national public commons that bolsters our international competitiveness.
Free of commercial data-mining and the ultra-marketing of social networks like MySpace and Facebook, this new online public sphere would evolve into a robust multitude of open channels and spaces where people could safely share ideas, experiment with innovative design, and debate issues and policies. The talents and organizing skills of the millennial generation, whose numbers now exceed those of their Baby Boomer parents, can be harnessed to connect citizens across online communities and amplify America's independent media voices and visions globally. As a benefit, these Digital New Deal-makers will earn a living wage, be able to retire college debt and develop a lifelong commitment to the public good.
What will this work look like? Youth-driven teams would design tools, social networks and online environments that bolster and stimulate community-building and citizen participation. They would work with information technology specialists to democratize the next generation of broadband access. And they could creatively partner with nonprofits, public schools and communities to build technological and networking capacity that will help us address challenges such as climate change, lack of health care and economic hardship.
The Digital New Deal will also foster a much-needed intergenerational knowledge exchange. Professional development goes both ways - young people showing their elders how to take advantage of Web 2.0 while public sector leaders and educators pass on the experience and wisdom they have gained working as organization builders. The expertise and enthusiasm of millennials and Boomers are complementary and can transform America's public communications sphere - if we make this knowledge exchange a priority.
When Franklin D. Roosevelt put millions of Americans to work designing, building and repairing our country's roads, parks, buildings and schools, they were beautifully constructed for generations to use and enjoy. The construction of a widely accessible broadband digital network is now as important as President Roosevelt's public works infrastructure expansion was in the last century.
Like other moments in American history when far-reaching public works initiatives were implemented, there will be cynicism and disdain along with relentless fear-mongering to bring down this "activist" government program. But the benefits of a Digital New Deal are vast and cannot be overestimated.
Creative potential will be unleashed through new media and social networking pathways in ways we have never experienced, influencing where we live and how we work. Young people will be able to acquire entrepreneurial and leadership skills needed for a 21st century workforce, and the public sector will be recharged and better prepared to handle problems of our time.
As the economy falters and technological innovation slows, the Digital New Deal can translate into trillions of dollars for a U.S. economy wired for the online demands of the 21st century. It will create new skill sets and jobs for people who are now struggling, and bring new participants into the information economy. Without a large-scale public sector agenda, private enterprise will simply not provide this on its own.
Imagine after the 2008 election, a swarm of arts and culture leaders, public interest and policy advocates, energetic young software developers, philanthropists, media reformers and forward-thinking politicians banding together in a broad coalition to construct this Digital New Deal. How this investment in our future would be implemented - including public and private partnerships - is a debate well worth having.
Helen De Michiel is the national co-director of the National Alliance for Media Arts and Culture (NAMAC), based in San Francisco.
That’s all folks
Colin
******************************************************************************************************