November 23, 2010 By Ulf Wolf
Kentaro Toyama, a researcher in the School of Information at the University of California, Berkeley, recently penned a remarkable article for the Boston Review, in which he challenges the widely held digital-age assumption that “more is better” and that a PC in every family worldwide will solve everything.
“Technology—no matter how well designed,” he says in the article, “is only a magnifier of human intent and capacity. It is not a substitute.”
I cannot recommend this well-researched, well-reasoned, and well-articulated article enough; it should be required reading for everyone participating in the assault on the digital divide.
Technology on its own, Mr. Toyama points out, will never solve the problems of poverty, illiteracy, and discrimination. Without the desire—and skill—to use technology for betterment (rather than entertainment), projects like One Laptop Per Child (OLPC) are doomed to failure.
“The myth of scale is the religion of telecenter proponents, who believe that bringing the Internet into villages is enough to transform them.
“Technology is a magnifier in that its impact is multiplicative, not additive, with regard to social change. In the developed world, there is a tendency to see the Internet and other technologies as necessarily additive, inherent contributors of positive value. But their beneficial contributions are contingent on an absorptive capacity among users that is often missing in the developing world. Technology has positive effects only to the extent that people are willing and able to use it positively.”
He goes on to observe, “When a village has ready access to a PC, the dominant use is by young men playing games, watching movies, or consuming adult content.”
The old adage of leading a horse to the digital river comes to mind.
The real issue is not technology, or lack thereof. The real issue is human intention, human understanding, human enlightenment, and human education. Technology will only magnify (for better or worse) the underlying human condition.
Please do yourself a great favor. Read the whole article.
November 15, 2010 By Ulf Wolf
Carolyn Kellogg reports in the LA Times that Forrester Research now predicts that eBook sales will total $966 million in 2010, and that next year we will cross the $1 billion line for total eBooks sales.
If Forrester is to be believed, people who get the hang of reading eBooks do shift their book-buying from hardcover and paperback to eBooks. As Forrester's James McQuivey writes, “the average eBook reader already consumes 41% of books in digital form.” Carolyn Kellogg adds, “Those who've taken the plunge and gotten a Kindle or other eReaders have an even higher percentage: 2 out of 3 books they read are eBooks.”
I can verify this from personal experience. Since buying a latest-generation, six-inch-screen Kindle, I have bought one paper book (not available on Kindle) and easily twenty Kindle eBooks. eBooks are easier to read, especially if you like to read in bed, and Kindle’s implementation of its built-in dictionary (the New Oxford American Dictionary) is nothing short of brilliant.
Bottom line for me as a reader: If I plan to get a book, I always check Kindle availability first, and if they have it I normally get it right away (it’s downloaded and readable in minutes). If I have questions about the book, I can download a free sample (usually 10% or so of the book itself). If the book is unavailable I usually opt to wait until it is—letting Amazon know that I’d like to see this title on Kindle.
Amazon’s own statistics back this up; the company announced in July of this year that during its spring quarter, Kindle eBooks outsold hardcover books.
Still, the Forrester report indicated that today only 7% of online adults who read books read eBooks. Translation: excellent prospective market growth (to put it mildly).
McQuivey writes, "even if we never get color e-Ink screens, if publishers never experiment with eBook subscriptions, and interactive eBook formats never succeed, we will still see digital get close to $3 billion in size by the middle of the decade."
Carolyn Kellogg questions whether eBooks will cannibalize traditional book sales; that’s to say, will the buyer of a new best-seller buy both the hardcover version and the eBook?
My personal answer: I’ll never buy paper again, if I can help it.
November 8, 2010 By Ulf Wolf
Aussie Ross Dawson is a well-known Internet media guru who has never been shy in his predictions. In August of this year he predicted that newspapers in their current form will be irrelevant in Australia by the year 2022, an announcement that garnered a fair amount of international attention from quarters like The Australian and UK-based The Guardian.
In a recent blog post, he has come back with an expanded prediction that includes the projected newspaper “irrelevance date” for various nations.
As he puts it on the site, “Part of the point I wanted to make was that this date is different for every country. As such I have created a Newspaper Extinction Timeline that maps out the wide diversity in how quickly we can expect newspapers to remain significant around the world.
“First out is USA in 2017, followed by UK and Iceland in 2019 and Canada and Norway in 2020. In many countries newspapers will survive the year 2040.”
His site lists a number of global factors that determine the remaining lifespan of the printed newspaper. Then there are national factors that extend or shorten that lifespan, and they vary for each country. Those Ross took into consideration include, among others, the relative interest in local and global news.
Not a Question of If
The digital writing is on the wall. I believe that by 2050 the only print newspapers still surviving will be small, local papers covering the news that the by-then overwhelmingly digital news industry cannot, or does not want to, cover.
There will also still be a digital divide, primarily due to user preference (which in turn implies demographics, such as age), and those who reside on the far side of that divide will continue to support the printed newspaper, keeping the smaller, local papers alive.
Still, it is not a question of “if” the printed paper will eventually go the way of the dinosaurs; it is a question of when.
Ross Dawson is globally recognized as a leading futurist, entrepreneur, keynote speaker, strategy advisor, and bestselling author. He is Founding Chairman of four companies: professional services and venture firm Advanced Human Technologies, future and strategy consulting group Future Exploration Network, leading events firm The Insight Exchange, and influence ratings start-up Repyoot.
Ross is the author most recently of Implementing Enterprise 2.0; the prescient Living Networks, which anticipated the social network revolution; and the Amazon.com bestseller Developing Knowledge-Based Client Relationships. He is based in Sydney and San Francisco with his wife, jewelry designer Victoria Buckley, and their two beautiful young daughters.
November 1, 2010 By Ulf Wolf
Have you ever tried to use your cell phone at a well-attended event, say a large outdoor music festival, where most of the crowd is also trying to use theirs? The result is usually some sort of “no network at the moment, please try again later” message, since there are only so many channels available.
One answer—and the one most commonly applied—is to add more cell towers, or to add more channels to the existing ones. That gets very expensive over time, and providers, of course, analyze traffic patterns to see whether there is a return on investment in doubling the channels next to the football stadium or the festival site. Since such added capacity would be used only sporadically, it most likely would not provide sufficient return.
Indeed, effective—and economic—bandwidth utilization is at the core of network design, and the solutions and approaches to this problem are many.
One of the most refreshing solutions I’ve come across lately is detailed in an article on the UK site “The Frontline,” which suggests that a network of human nodes will solve the problem.
According to the article, “Researchers at Queen's University of Belfast (QUB) are working on what could be the most sci-fi approach yet to improving the UK's mobile phone coverage - using human beings as network nodes.
“Academics at QUB's Institute of Electronics, Communications and Information Technology (ECIT) said they believe using mobile sensors carried on the human body could form the backbone of new mobile internet networks.
“However, this doesn't mean putting a chip in your frontal lobe. Instead it would more likely mean that sensors were carried inside other devices, such as the next-generation of smartphones, which would then communicate with one another.”
This quite brilliant outlook mirrors in many ways what the Internet as a whole, as well as specific applications like BitTorrent, does: it breaks information down into a multitude of packets, sends each one over the next available channel (or over as many as are available at that instant), and reassembles them at the receiving end.
It would also, in many instances, cut out the “middle man,” communicating node to node rather than node to tower to node.
As the article puts it, “This would create a body-to-body network (BBN) that would allow phones to boost coverage by sending information between themselves before reaching a base station.”
(I love the acronym: BBN)
It would also mean that at that music festival, where before you could not get a channel, you would now have an abundance of available nodes at your disposal, and your chances of placing a call, or receiving a text, would in fact be greatly improved.
Dr. Simon Cotton, from ECIT, pointed out that individual human-carried sensors—which would be worn on the body, carried on smartphones or even integrated into clothing—would bring improvements in a number of areas.
“It will provide a number of key benefits compared to cellular networks alone such as in disaster situations where cellular infrastructure has become damaged or is unavailable, body-to-body networks could help provide networking for relief workers and civilians,” he explained.
“It promotes the concept of 'green spectrum' whereby we can re-use frequency allocations over much shorter distances, meaning that the precious resource of radio spectrum is utilized much more fully.”
Cotton also said that this technology would “most likely” be able to match LTE speeds as data transfers would only be limited by the availability of network paths that other enabled devices provide, meaning even large HD files could be transferred.
LTE (Long Term Evolution) is a technology that may be used for the next-generation mobile communications network. LTE supports speeds of over 100 Mbit/s downstream and 50 Mbit/s upstream.
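To put those speeds in perspective, here is a quick back-of-the-envelope calculation; the 4 GB file size is my own hypothetical example of a "large HD file," not a figure from the article:

```python
# Rough transfer-time estimate at LTE's nominal downstream rate.
# The 4 GB file size is a hypothetical example, not from the article.
file_size_bits = 4 * 8 * 10**9   # 4 GB expressed in bits (decimal units)
lte_downstream = 100 * 10**6     # 100 Mbit/s nominal downstream

seconds = file_size_bits / lte_downstream
print(f"{seconds:.0f} s (~{seconds / 60:.1f} min)")  # 320 s (~5.3 min)
```

In practice, of course, real-world throughput would be lower, and in a body-to-body network it would depend on how many node paths are available at that moment.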
“Instead of burdening the cellular network,” Cotton added, “the software intelligently splits the file into smaller streams and transmits the content to multiple nearby persons who then forward data to other humans acting as network nodes and so on.”
“The file is reassembled when reaching the intended user. Data rates here are really only limited by the number of discrete paths the data can find through the network.”
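As a thought experiment, the split-forward-reassemble scheme Cotton describes can be sketched in a few lines of Python. The node names, chunk size, and paths below are illustrative assumptions on my part, not details from the QUB research:

```python
# A toy sketch of the body-to-body relay idea: split a message into
# numbered chunks, send each chunk over a different multi-hop path of
# "human" nodes, then reassemble the chunks in order at the receiver.

CHUNK_SIZE = 4  # bytes per chunk; tiny, for demonstration only

def split(data, size=CHUNK_SIZE):
    """Number the chunks so each can travel independently yet still be ordered."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def forward(chunk, path):
    """Relay a chunk hop by hop; here each node simply passes it along."""
    seq, payload = chunk
    for node in path:
        pass  # a real node would retransmit the chunk toward the destination
    return seq, payload

def reassemble(chunks):
    """Sort by sequence number and join, regardless of arrival order."""
    return b"".join(payload for _, payload in sorted(chunks))

message = b"Hello from the festival crowd!"
paths = [["alice", "bob"], ["carol"], ["dave", "erin", "frank"]]

# Each chunk takes whichever path happens to be available at that instant.
received = [forward(chunk, paths[i % len(paths)])
            for i, chunk in enumerate(split(message))]

print(reassemble(received))  # b'Hello from the festival crowd!'
```

The sequence numbers are what make the scheme work: because chunks may arrive out of order over paths of different lengths, the receiver sorts before joining, just as Cotton describes the file being "reassembled when reaching the intended user."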