General New Media

the end of history as we know it

A Chronicle article – No Computer Left Behind – by Daniel J. Cohen and Roy Rosenzweig of GMU (requires an account) discusses how access to Internet information makes multiple-choice tests redundant, and in doing so addresses the issue of how to trust information on the web.

Computer scientists have an optimistic answer for worried scholars. They argue that the enormous scale and linked nature of the Web make it possible for it to be “right” in the aggregate while sometimes very wrong on specific pages. The Web “has enticed millions of users to type in trillions of characters to create billions of Web pages of on average low-quality contents,” write the computer scientists Rudi Cilibrasi and Paul Vitányi in a 2004 essay. Yet, they continue, “the sheer mass of the information available about almost every conceivable topic makes it likely that extremes will cancel and the majority or average is meaningful in a low-quality approximate sense.” In other words, although the Web includes many poorly written and erroneous pages, taken as a whole the medium actually does quite a good job encoding meaningful data.

“Good enough” is good enough for multiple-choice tests.  Now what we need is a way of comparing all that information to get a good enough answer.  Google does it for translations.  And George Mason is experimenting with it through H-Bot, a historical software agent.  Give it a historical question, and it uses a set of algorithms to compare documents on the subjects embedded in the question and return an accurate answer.

Right now H-Bot can only answer questions for which the responses are dates or simple definitions of the sort you would find in the glossary of a history textbook. For example, H-Bot is fairly good at responding to queries such as “What was the gold standard?”, “Who was Lao-Tse?”, “When did Charles Lindbergh fly to Paris?”, and “When was Nelson Mandela born?” The software can also answer, with a lower degree of success, more difficult “who” questions such as “Who discovered nitrogen?” It cannot currently answer questions that begin with “how” or “where,” or (unsurprisingly) the most interpretive of all historical queries, “why.” In the future, however, H-Bot should be able to answer more difficult types of questions as well as address the more complicated problem of disambiguation—that is, telling apart a question about Charles V the Holy Roman Emperor (1500-1558) from one about Charles V the French king (1338-1380). To be sure, H-Bot is a work in progress, a young student eager to learn. But given that its main programming has been done without an extensive commitment of time or resources by a history professor and a (very talented) high-school student, Simon Kornblith, rather than a team of engineers at Google or MIT, and given that a greater investment would undoubtedly increase H-Bot’s accuracy, one suspects that the software’s underlying principles are indicative of the promise of the Web as a storehouse of information.  (Web of Lies)
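The “right in the aggregate” idea is easy to sketch. This is not H-Bot’s actual algorithm – just a minimal Python illustration of the majority-vote principle, with invented snippets standing in for web search results: even with one wrong page in the mix, the extremes cancel and the majority answer wins.

```python
import re
from collections import Counter

def most_likely_year(snippets):
    """Return the year mentioned most often across the snippets --
    the 'majority is meaningful' idea in miniature."""
    years = []
    for text in snippets:
        years.extend(re.findall(r"\b(1[0-9]{3}|20[0-9]{2})\b", text))
    if not years:
        return None
    return Counter(years).most_common(1)[0][0]

# Invented snippets, including one erroneous page.
snippets = [
    "Nelson Mandela was born in 1918 in Mvezo.",
    "Born 18 July 1918, Mandela became South Africa's president.",
    "One page wrongly claims Mandela was born in 1920.",
    "Mandela (1918-2013) led the anti-apartheid movement.",
]
print(most_likely_year(snippets))  # → 1918
```

The single bad page is simply outvoted – which is the whole of Cilibrasi and Vitányi’s point, reduced to a counter.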

My first thought was about the state of knowledge underlying the algorithm.  Wouldn’t it take a discerning expert to determine the accuracy of the response?  Sure it might.  H-Bot provides a target answer that the expert can check.  But after a while, even the expert comes to trust the algorithm.

Cohen and Rosenzweig finally argue that once we trust the factual accuracy of looked-up information, we can set aside multiple-choice tests and move on to more interesting – deeper and more significant – questions.

Now that newer technology threatens the humble technology of the multiple-choice exam, we have an opportunity to return to some of the broader and deeper measures of understanding in history — and other subjects.

I’d look forward to this because it means we’d have to redesign exams.  Change the form of the exam and you change what’s taught.

Listening to Dancing Shoes from “Whatever People Say I Am, That’s What I’m Not” by Arctic Monkeys

General New Media Wikis

corporate wiki melodrama

The End of E-Mail presents a melodramatic take on wikis for content coordination that will interest professional writing students in E-Rhetoric and Weblogs and Wikis.

The simple act of writing a press release, for example, can require as many as five people trading e-mails over the course of a week, often sending attachments of newly edited versions. It leads to lots of confusion. Which document is the current version? Who has made changes? Who still needs to weigh in? For the managers, keeping track of it all had become a time-consuming nightmare.

The hero enters (and note the name and his specialization):

Tom Biro joined the company as director of new media strategies in August 2005 and was shocked at the mess the company’s e-mail system had become. Fortunately, he had an answer: wikis.

And here are some of the virtues reported (with the stock hyperbole): doubled productivity, slashed meetings.

By eliminating the need to use e-mail to trade project updates, creative teams have been able to double their productivity, Biro says. The wiki also has slashed the number of meetings and conference calls: Anyone can simply pull up the wiki on his or her Web browser and get a full progress report at any time.

In this story, the wiki is positioned as The Fortunate Answer to a Pervasive Problem, a usurper which foretells The End of E-Mail. It’s a melodrama.

Listening to Destiny from “Simple Things” by Zero 7

General New Media Wikis

watching business sell wikis

My Google Alert for wiki is telling me that more and more businesses and organizations are discovering wikis, blogs, and RSS, and more software developers are creating systems to manage them. The latest is Stellent’s Wiki, Blog, RSS Organizer.

The Stellent Universal Content Management helps customers integrate wikis and blogs in multi-site Web content management.

The curious thing from a rhetorical angle is how Stellent describes the system’s functions. This is what a wiki looks like through marketing eyes:

Stellent Universal Content Management lets wiki contributors create hyperlinks in both pattern-matching and wizard style formats, which in turn allows users to link to other topics and pages within a wiki site, as well as other Web sites.

When an author creates a new hyperlink about a particular subject, the Stellent system will automatically link to a wiki page about that topic. If the page does not exist, it will automatically create a new page.
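Strip away the marketing language and that link behavior is a one-regex affair in most wiki engines. A hedged sketch (the page store and page names here are invented, and real engines vary in their link patterns):

```python
import re

# A toy in-memory page store standing in for the wiki's database.
pages = {"RecentChanges": "...", "GoldStandard": "..."}

# Classic CamelCase wiki-word pattern: two or more capitalized runs.
WIKI_WORD = re.compile(r"\b(?:[A-Z][a-z]+){2,}\b")

def render(text):
    """Link wiki words to existing pages; mark missing pages
    with a create link, as most wiki engines do."""
    def link(match):
        name = match.group(0)
        if name in pages:
            return f'<a href="/wiki/{name}">{name}</a>'
        # Page doesn't exist yet: offer to create it.
        return f'{name}<a href="/wiki/{name}?action=create">?</a>'
    return WIKI_WORD.sub(link, text)

print(render("See RecentChanges and the new TeaParty page."))
```

Pattern-matching finds RecentChanges and links it; TeaParty has no page yet, so it gets the create link instead.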

Those are basic functions of any wiki: if the page exists, link to it (pattern-matching). If it doesn’t, create it (wizard style). And here are described, in turn, Recent Changes and Diffs:

The Stellent technology also records a history of wiki activity, so readers know who writes or changes content, how many times content is revised and if there are certain topics currently under heavy debate. A locking and revision control feature ensures only one user may change content at a time, and it also keeps an audit trail of all revisions which is then available for records and retention management purposes.

The concerns and values of the corporate world are clear: tracking who changed what, revision control, audit trails.
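The audit trail, too, is old wiki machinery. Python’s standard difflib can produce the revision-to-revision diff the blurb describes; the page text and author names below are invented for illustration:

```python
import difflib

# Two revisions of a wiki page, with the author recorded per save --
# a toy version of the history and audit trail described above.
history = [
    ("anna", "The gold standard tied currency to gold.\n"),
    ("ben",  "The gold standard tied a currency's value to gold.\n"),
]

old_author, old_text = history[0]
new_author, new_text = history[1]

diff_text = "".join(difflib.unified_diff(
    old_text.splitlines(keepends=True),
    new_text.splitlines(keepends=True),
    fromfile=f"rev 1 ({old_author})",
    tofile=f"rev 2 ({new_author})",
))
print(diff_text)
```

Who changed what, and when, falls out of storing (author, text) pairs per save; the diff is just a view over that history.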

What interests and concerns are embedded in Ward’s original definition of wiki as “The simplest online database that could possibly work”? Discuss.

Listening to Everytime We Live Together We Die A Bit More from “The Magnificent Tree” by Hooverphonic

General New Media

sunday reaching towards closure

Students in Weblogs and Wikis are just about ready to start their projects. I was up at 2:00 this morning reviewing some late-submitted proposals and requesting some last-minute clarifications, but most of the projects are approved and nearly ready to go. And there are some interesting projects that push the form:

  • tran-sex-scend me now: a 10 week look at sex and gender issues in books and movies
  • i married you at twenty-two: “a notebook-style blog … that will record the emotions and experiences I’m going through (and have already gone through) as a young person in my first year of marriage.”
  • jones poems: a blog using “poetry and photography to support each other.”
  • absolute tea, in which the blogger will record “my experiences with loose leaf tea in a reputation building manner while providing unique knowledge of tea.”

and a couple of wiki-based projects so far:

  • scrapbook wiki: an online scrapbook of a trip to New Zealand, including “photographs, personal written journal, written class journal, blog, and photoblog.”
  • after anarchy tea party wiki: a wiki organized to “define, clarify, and expand on all the possibilities, angles, problems, implications and possible implementations of anarchy in its various forms.”

And I was looking for a way to wrap up this first third of the class, just something to take away, to think about. I ran into this passage by Meg Hourihan, in the back pages of Essential Blogging, Doctorow et al. (O’Reilly, 2002).

When we talk about weblogs, we’re talking about a way of organizing information, independent of its topic. What we write about does not define us as bloggers; it’s how we write about it (frequently, ad nauseam, peppered with links).

Weblogs simply provide the framework, as haiku imposes order on words. The structure of the documents we’re creating enables us to build our social networks on top of it – the distributed conversations, the blogrolling lists, the friendships that begin online and are solidified … in the real world.

As bloggers, we’re in the middle of, and enjoying, an evolution of communication. The traits of weblogs … will likely change and advance as our tools improve and our technology matures. What’s important is that we’ve embraced a medium free of the physical limitations of pages, intrusions of editors, and delays of tedious publishing systems. As with free speech itself, what we say isn’t as important as the system that enables us to say it.

That’s a good passage. It strikes the right tone to start blogging projects and wiki projects. It stirs up some ideas by placing emphasis on the medium and affordances rather than the content. The important third paragraph applies to wikis, as well, and the other two paragraphs point up the differences between the two writing spaces. Romantic, uplifting, memorable, and arguable – which is what I want to leave the class with for the next ten weeks.

That, and a picture of a cat with a glass of wine.

Blogging General New Media Wikis

catching up by looking backwards

Saturday’s to do list: Catch up

And then, on Sunday, …

Listening to Better Than Bad from the album “The Debt Collection” by The Shortwave Set

General New Media Wikis

blamb’s screencasting teasers

Brian Lamb, over on Abject Learning, has been doing some nifty screencasts – one on folksonomy, and one looking at some of the weblogs at UBC. What’s interesting is how Brian connects the audio to a scroll through the material he’s covering on the UBC wiki, and to screen images of sites he’s referring to. That little addition of the screenshot – the wiki itself is all text – makes the presentation memorable because Brian grounds what he’s talking about in a screen image. The screencasts are tightly focused, concise, and – and this is the important part – they are composed to encourage exploring the topic further. Teasers. Models of the genre.

Collect the whole set and trade them with your friends! I’ve been porting Brian’s screencasts to my video iPod.

General New Media

netiquette on the make

I’ve added a Pod and Video Cast category to my blogroll and ran into a netiquette issue: Do I link to the subscription RSS feed, or to an info/subscription page? The former means the link might open in a newsreader or podcast client and download or stream a pretty big, unexpected file. The latter adds a step to getting to the source, but it’s more in line with the convention of linking to pages.

I went for the …

General New Media

no foul

The stars come out for chickens in need: MAIL ORDER CHICKENS.