Open Distributed Scientific Annotations Cloud
Each reader of a scientific paper can publish their annotations to a distributed public annotations cloud; others can load them as they read, and discuss.
So, let's say you are reading a paper and have ideas and annotations along the way. You click (or point) at the location where you want to add an annotation. The system captures the context of that location in the paper: the reader extracts a large enough context of surrounding words or sentences to uniquely identify the location, which allows the same annotation to be displayed around the same text in other formats later, be it HTML on the web or anything else. If the location is a picture, image features and the pixel coordinates are extracted instead, allowing the same annotation to be displayed on top of the same image in other formats. Essentially, we would have context IDs and coordinates, with each context ID associated with a feature set: a 1:1 correspondence between context IDs and feature sets, and a 1:many correspondence between context IDs and annotations.
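As a minimal sketch of what the text-anchoring part could look like (the function name `make_context_anchor` and the fixed character window are my assumptions, not part of the idea above):

```python
import hashlib

def make_context_anchor(full_text: str, char_offset: int, window: int = 200) -> dict:
    """Extract a context window around the annotated location and derive a
    context ID from it, so the same spot can be re-found in other renderings
    of the same text (HTML, EPUB, ...)."""
    start = max(0, char_offset - window)
    end = min(len(full_text), char_offset + window)
    context = full_text[start:end]
    # Normalize whitespace so reflowed or re-extracted text hashes to the same ID.
    normalized = " ".join(context.split()).lower()
    context_id = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
    return {
        "context_id": context_id,                   # 1:1 with the feature set below
        "features": normalized,                     # surrounding text that identifies the location
        "offset_in_context": char_offset - start,   # where inside the context the annotation points
    }
```

An annotation store would then keep the 1:many mapping from `context_id` to annotations, while the feature set lets any reader re-locate the anchor in its own rendering of the text.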
Then, whoever reads the paper, in whatever reader, could load the public annotations and browse their history. It would be nice to have a conversation per annotation: each annotation opens a thread of comments, and inside the comments you could refer to other annotations.
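A rough sketch of the data structures this implies (the names `Annotation` and `Comment`, and the specific fields, are hypothetical):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Comment:
    author: str
    text: str
    refers_to: List[str] = field(default_factory=list)  # IDs of other annotations referenced in the comment

@dataclass
class Annotation:
    annotation_id: str   # e.g., hash of (context_id, author, timestamp)
    context_id: str      # anchors the annotation to a location in the paper
    paper_id: str        # identifies the paper itself (DOI or text-derived ID)
    body: str            # the annotation text
    thread: List[Comment] = field(default_factory=list)  # one conversation per annotation
```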
Moreover, each paper would have a paper ID generated from features extracted from the paper's text, especially the title and summary, or, if one exists, simply its DOI. It seems good to make such a system as widely usable as possible: not just for scientific papers, but for any PDFs in general.
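A minimal sketch of such an ID scheme, assuming we prefer the DOI when available and otherwise hash normalized title plus summary text (`make_paper_id` is a hypothetical name):

```python
import hashlib
import re
from typing import Optional

def make_paper_id(title: str, summary: str, doi: Optional[str] = None) -> str:
    """Derive a stable paper ID: use the DOI when it exists,
    otherwise hash the normalized title + summary text."""
    if doi:
        return "doi:" + doi.strip().lower()
    normalized = re.sub(r"\s+", " ", (title + " " + summary).lower()).strip()
    return "txt:" + hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```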
Hopefully, this would make reading papers not a lonely activity at all, and the cross-pollination of ideas would lead to many new developments.
A hypothetical electromechanical device enabling individuals to develop and read a large self-contained research library, create and follow associative trails of links and personal annotations, and recall these trails at any time to share them with other researchers, closely mimicking the associative processes of the human mind.
Is it the location that is important, or the actual text snippet being annotated? Maybe you could somehow hash the text snippet, then process any document (in mostly any format, just by extracting the text) and produce the set of hashes of all snippets that could ever be annotated in it (using some clever form of rolling hash or so). Then use that to fetch all the annotations ever created for it through some form of content-addressing system.
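A minimal sketch of that snippet-hashing step, with assumptions of my own: a fixed word window of 12 and plain re-hashing of each window rather than a true rolling hash (`snippet_hashes` is a hypothetical name):

```python
import hashlib

def snippet_hashes(text: str, window_words: int = 12) -> set:
    """Slide a fixed-size word window over the extracted text and hash each
    window, producing the set of keys under which annotations for this
    document could be stored and fetched in a content-addressed system."""
    words = " ".join(text.split()).lower().split()
    if len(words) <= window_words:
        windows = [" ".join(words)] if words else []
    else:
        windows = [
            " ".join(words[i:i + window_words])
            for i in range(len(words) - window_words + 1)
        ]
    return {hashlib.sha256(w.encode("utf-8")).hexdigest() for w in windows}
```

Any reader, in any format, that extracts the same text would compute the same hash set and could query the annotation store by those keys.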
Your idea reminds me a bit of Xanadu.
Maybe this idea could be extended to any content published on the web?