Shirky, the Semantic Web, and even more on recommendations
I was reading Clay Shirky's The Semantic Web, Syllogism and Worldview, and found it a very interesting and challenging article. I do have a couple of thoughts on his ideas, however, and am not in total agreement.
- The semantic web is (initially) likely to be most useful for simple logic statements rather than for more complex combinations.
- The failings in the examples of syllogisms are syllogistic in their nature: if insufficient or contradictory information is given, wrong conclusions will always be drawn; garbage in, garbage out. The Brooklyn accent example, for instance, could be resolved by removing the generalisation.
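The point about faulty premises can be sketched in code. Below is a minimal, hypothetical forward-chainer (the triples, names, and rule are my own toy encoding, not Shirky's): given an over-general premise, the wrong conclusion follows mechanically, and the deduction itself is blameless.

```python
def infer(facts, rules):
    """Naively forward-chain: apply every (premise, conclusion) rule
    to every known fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclude in rules:
            for fact in list(derived):
                if premise(fact):
                    new = conclude(fact)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

# Facts as (relation, subject, object) triples -- hypothetical data.
facts = {("lives_in", "Deb", "Brooklyn")}

# The over-general premise: *everyone* who lives in Brooklyn has the accent.
rules = [(
    lambda f: f[0] == "lives_in" and f[2] == "Brooklyn",
    lambda f: ("has_accent", f[1], "Brooklyn"),
)]

# The machine dutifully derives ("has_accent", "Deb", "Brooklyn"), which
# may well be false of the individual: garbage in, garbage out.
print(infer(facts, rules))
```

Removing the rule, or narrowing its premise, removes the bad conclusion: the inference machinery never erred, the premise did.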
- I agree that deductive reasoning alone is unlikely to create artificial intelligence, but a basic set of rules about the nature of things could form the basis for more complex probabilistic reasoning, I reckon. (Not that I'm claiming to know much about AI.)
- Of course ontology creation is political and has a social context: all ordering and classification is. We comprehend the world through arbitrary symbols - why write an article if you don't trust the arbitrary rules of words and grammar? This is why I reckon that the semantic web isn't going to be a coherent whole (if it happens at all), but a series of smaller, overlapping (rather than coincident) ontologies.
Anyway, I'd recommend the article for its scepticism: healthy, intelligent stuff. On another note, I noticed the following, towards the article's close, with interest:
"Social networking services [...] assume that people will treat links to one another as external signals of deep association, so that the social mesh as represented by the software will be an accurate model of the real world [...] and as a result, links between people on Friendster have been drained of much of their intended meaning. Trying to express implicit and fuzzy relationships in ways that are explicit and sharp doesn't clarify the meaning, it destroys it." (my italics)
I don't 100 per cent agree with this. Yes, you cannot express the "true" value of a social network in this way, but if you are honest about precisely what you are attempting to achieve, and about the fact that the attempt will not be a perfect representation, then what's the problem? In my trust model, saying I trust B's opinion on everything is only a coarse approximation of saying I really trust B's opinion on ice cream, trust him slightly less on sorbet, and don't trust him at all on frozen yoghurt; but I can still gain value from expressing this relationship of "general" trust, and the network is far more maintainable (and therefore valuable) for it.
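The trade-off I mean can be sketched like this (a hypothetical toy with made-up scores, not a real trust system): per-topic scores are closer to the "true" relationship but cost more to keep current, while a single general score is lossy but cheap, and still usable.

```python
# Hypothetical toy trust model with made-up scores.
# One coarse score per person: lossy, but trivial to maintain.
general_trust = {"B": 0.7}

# Per-topic scores: more faithful, but every new topic is another
# entry someone has to keep up to date.
topic_trust = {
    ("B", "ice cream"): 0.9,
    ("B", "sorbet"): 0.6,
    ("B", "frozen yoghurt"): 0.0,
}

def trust(person, topic):
    """Prefer a topic-specific score; fall back to the general one."""
    return topic_trust.get((person, topic), general_trust.get(person, 0.0))

print(trust("B", "ice cream"))  # 0.9 (topic-specific)
print(trust("B", "gelato"))     # 0.7 (falls back to general trust)
```

The fallback is the point: an explicit-and-sharp "general" link loses nuance, but it degrades gracefully rather than destroying meaning outright.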
Here are some comments from others better placed to comment than myself (care of Blogdex):