The following table lists the major metadata for each paper, with links to the PDF versions of all the full papers. PDFs for the short papers will be added soon.
An analysis of the bibliographies shows that the most significant source of references remains the ACM Hypertext conference series. You can see a visualisation of how the previous work of the hypertext community has influenced these papers in the following graph.
|Session 1: (Wednesday am) Mixed Reality Hypermedia|
|1||HyperReal: A Hypermedia Model for Mixed Reality ||Luis Romero, Nuno Correia |
Open Hypermedia, Fundamental Open Hypermedia Model (FOHM), Adaptive Hypermedia, Linking, Navigation
This paper describes a generic hypermedia model that is used as a framework for building context aware and mixed reality applications. It can handle different media elements, and it defines a presentation scheme that abstracts several relevant navigation concepts, including link awareness. The model specifies a base structure for the relation between spaces, either real or virtual, and supports contextual mechanisms. Additionally, it establishes a way to correlate real/virtual world objects with information present in the hypermedia graph. It also includes store/replay mechanisms that can be used to repurpose the content in new ways, including storytelling applications. The proposed model is being tested in a gaming and storytelling environment that integrates the real world, media elements and virtual 3D worlds. The paper presents the overall framework, the current implementation and evaluates its usage in the prototype application.
|2||Physical Hypermedia: Organising Collections of Mixed Physical and Digital Material ||Kaj Gronbaek, Jannie F. Kristensen, Peter Orbaek, Mette Agger Eriksen |
Linking, Evaluation, Accessibility, spatial hypermedia, augmented reality, tagging
This paper presents empirical examples of how people use collectional artifacts and organize physical material such as paper, samples, models, mock-ups, plans, etc. in the real world. Based on this material, we propose concepts for collectional actions and meta-data actions, and present prototypes combining principles from augmented reality and hypermedia to support organising and managing mixtures of digital and physical materials. The prototype of the tagging system is running on digital desks and walls utilizing RFID tags and tag-readers. It allows users to tag important physical materials, and have these tracked by antennas that may become pervasive in our work environments. We work with three categories of tags: simple object tags, collectional tags, and tooltags invoking operations such as grouping and linking of physical material. Our primary application domain is architecture and design, thus we discuss use of augmented collectional artifacts primarily for this domain.
|3||The Ambient Wood Journals - Replaying the Experience |
Engelbart award candidate
|Mark J. Weal, Danius T. Michaelides, Mark K. Thompson, David De Roure |
Adaptive Hypermedia, Narrative
The Ambient Wood project aims to facilitate a learning experience using an adaptive infrastructure in an outdoor environment. This involves sensor technology, virtual world orchestration, and a wide range of devices ranging from hand-held computers to speakers hidden in trees. Whilst performing user trials of the Wood, the activities of children participating in the experiments were recorded in detailed log files. An aim of the project has been to replay these log files using adaptive hypermedia techniques to enable the children to further reflect on their experience back in the classroom environment.
|Session 2: (Wednesday am) Emergent Web Patterns|
|4||Extracting Evolution of Web Communities from a Series of Web Archives ||Masashi Toyoda, Masaru Kitsuregawa |
Link Analysis, web community, evolution
Recent advances in storage technology make it possible to store a series of large Web archives. It is now an exciting challenge to observe the evolution of the Web. In this paper, we propose a method for observing the evolution of web communities. A web community is a set of web pages created by individuals or associations with a common interest in a topic. So far, various link analysis techniques have been developed to extract web communities. We analyze the evolution of web communities by comparing four Japanese web archives crawled from 1999 to 2002. Statistics of these archives and community evolution are examined, and the global behavior of evolution is described. Several metrics are introduced to measure the degree of web community evolution, such as growth rate, novelty, and stability. We developed a system for extracting the detailed evolution of communities using these metrics. It allows us to understand when and how communities emerged and evolved. Some evolution examples are shown using our system.
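The evolution metrics named in the abstract (growth rate, novelty, stability) lend themselves to a simple set-based formulation. The sketch below is an illustrative assumption about how such metrics could be computed from two community snapshots; the paper's exact definitions may differ.

```python
def evolution_metrics(old, new):
    """Compare two snapshots of a web community (sets of page URLs).

    The set-based definitions below are illustrative assumptions,
    not necessarily the paper's exact formulas.
    """
    return {
        "growth_rate": (len(new) - len(old)) / len(old),  # relative size change
        "novelty": len(new - old) / len(new),             # fraction of newly appeared pages
        "stability": len(old & new) / len(old),           # fraction of old pages retained
    }

# Hypothetical page IDs for a community in two successive archive crawls.
m = evolution_metrics({"a", "b", "c", "d"}, {"b", "c", "d", "e", "f"})
```

Comparing every pair of consecutive archives with such metrics yields a per-community trajectory, which is the kind of data the authors' system visualises.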
|5||The Connectivity Sonar: Detecting Site Functionality by Structural Patterns |
Engelbart award candidate
Nelson award winner
|Einat Amitay, David Carmel, Adam Darlow, Ronny Lempel, Aya Soffer |
Link Analysis, Hypertext Structure, Search Engines, Data Mining, World Wide Web, Web graphs, Web Information Retrieval
Web sites today serve many different functions, such as corporate sites, search engines, e-stores, and so forth. As sites are created for different purposes, their structure and connectivity characteristics vary. However, this research argues that sites of similar role exhibit similar structural patterns, as the functionality of a site naturally induces a typical hyperlinked structure and typical connectivity patterns to and from the rest of the Web. Thus, the functionality of Web sites is reflected in a set of structural and connectivity-based features that form a typical signature. In this paper, we automatically categorize sites into eight distinct functional classes, and highlight several search-engine related applications that could make immediate use of such technology. We purposely limit our categorization algorithms by tapping connectivity and structural data alone, making no use of any content analysis whatsoever. When applying two classification algorithms to a set of 202 sites of the eight defined functional categories, the algorithms correctly classified between 54.5% and 59% of the sites. On some categories, the precision of the classification exceeded 85%. An additional result of this work indicates that the structural signature can be used to detect spam rings and mirror sites, by clustering sites with almost identical signatures.
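The idea of a structural signature can be caricatured as: describe a site by a few per-page connectivity rates, then assign it to the functional class whose mean signature is nearest. The feature set, class centroids, and numbers below are hypothetical; the paper uses richer structural features and real classifiers.

```python
import math

def signature(n_pages, n_internal_links, n_inlinks, n_outlinks):
    # Per-page connectivity rates; a hypothetical feature set,
    # not the paper's actual one.
    return (n_internal_links / n_pages,
            n_inlinks / n_pages,
            n_outlinks / n_pages)

def classify(sig, centroids):
    # Nearest-centroid classification in feature space.
    return min(centroids, key=lambda label: math.dist(sig, centroids[label]))

centroids = {                        # toy class centroids (assumed values)
    "e-store": (8.0, 0.5, 1.0),
    "search engine": (2.0, 6.0, 0.2),
}
label = classify(signature(100, 750, 60, 90), centroids)
```

A site with dense internal linking and few inbound links lands near the "e-store" centroid in this toy space, illustrating how function can be inferred from connectivity alone, with no content analysis.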
|6||Automatically Sharing Web Experiences through a Hyperdocument Recommender System ||Alessandra Alaniz Macedo, Khai Nhut Truong, Jose Antonio Camacho-Guerrero, Maria da Graca Campos Pimentel |
Navigation, Linking, World Wide Web, Open Hypermedia, Semantics, Recommendation System
Beyond supporting user navigation on the web, recommender systems have been built to assist and augment the natural social process of asking other people for recommendations. In a typical recommender system, people provide recommendations as inputs, which the system aggregates and directs to appropriate recipients. In some cases, the primary transformation is in the aggregation; in others the value of the system lies in its ability to make good matches between the recommenders and those seeking recommendations. In this paper we discuss architectural and design features of WebMemex, a system that (a) provides recommended information based on capturing the history of navigation from a list of people well-known to the users --- including the users themselves, (b) allows the user to have access from any networked machine, (c) demands user authentication to access the repository of recommendations, and (d) allows the user to specify when capturing of her history should be performed.
|Session 3: (Wednesday pm) Hypermedia Semantics|
|7||Which Semantic Web? |
Engelbart award candidate
|Catherine C. Marshall, Frank M. Shipman |
Semantic Web, World Wide Web, Ontologies, Metadata, Hypertext Theory
Through scenarios in the popular press and technical papers in the research literature, the promise of the Semantic Web has raised a number of different expectations. These expectations can be traced to three different perspectives on the Semantic Web. The Semantic Web is portrayed as: (1) a universal library, to be readily accessed and used by humans in a variety of information use contexts; (2) the backdrop for the work of computational agents completing sophisticated activities on behalf of their human counterparts; and (3) a method for federating particular knowledge bases and databases to perform anticipated tasks for humans and their agents. Each of these perspectives has both theoretical and pragmatic entailments, and a wealth of past experiences to guide and temper our expectations. In this paper, we examine all three perspectives from rhetorical, theoretical, and pragmatic viewpoints with an eye toward possible outcomes as Semantic Web efforts move forward.
|8||Finding the Story - Broader Applicability of Semantics and Discourse for Hypermedia Generation ||Lloyd Rutledge, Martin Alberink, Rogier Brussee, Stanislav Pokraev, William van Dieten, Mettina Veenstra |
Narrative, Semantics, Discourse, Hypermedia, Clustering, Concept Lattices, RDF, SMIL
Generating hypermedia presentations requires processing constituent material into coherent, unified presentations. One large challenge is creating a generic process for producing hypermedia presentations from the semantics of potentially unfamiliar domains. The resulting presentations must both respect the underlying semantics and appear as coherent, plausible and, if possible, pleasant to the user. Among the related unsolved problems is the inclusion of discourse knowledge in the generation process. One potential approach is generating a discourse structure derived from generic processing of the underlying domain semantics, transforming this to a structured progression and then using this to steer the choice of hypermedia communicative devices used to convey the actual information in the resulting presentation. This paper presents the results of the first phase of the Topia project, which explored this approach. These results include an architecture for this more domain-independent processing of semantics and discourse into hypermedia presentations. We demonstrate this architecture with an implementation using Web standards and freely available technologies.
|Session 4: (Wednesday pm) Adaptive Hypermedia (1)|
|9||Integrating User Operations in Multichannel Hypermedia ||Franca Garzotto, Vito Perrone |
Short paper: Adaptive Hypermedia, Hypertext Structure, conceptual modelling, multi-channel, services on the Web, Web operations, context, UML, OCL
Web Applications are progressively becoming multi-channel and cross-channel. The “same” service should be made available in different delivery environments and devices. A user may invoke a service on one device, suspend it, and complete its execution on another. In this paper we present the main concepts and innovative aspects of MC2, a design framework for specifying multi/cross-channel web application services. MC2 adopts a high-level, end-user perspective and exploits the notion of context to characterize by whom, where, and how an operation can be invoked.
|10||Pocket News: News Contents Adaptation for Mobile User ||Youn-Sik Hong, In-Sook Park, Jeong-Taek Ryu, Hye-Sun Hur |
Short paper: Adaptive Hypermedia, Evaluation, Linking
We have presented a system called Pocket News that automatically transforms web content into content adapted for a mobile terminal, especially a PDA. It is well suited to frequently changing web sites, such as news sites. We have also proposed a page-splitting technique for navigating mobile pages with button controls instead of conventional scroll up/down controls. The proposed system also produces an mHTML page for mobile phones that support Mobile Explorer.
|11||AHA! The Adaptive Hypermedia Architecture||Paul De Bra, Ad Aerts, Bart Berden, Barend de Lange, Brendan Rousseau, Tomi Santic, David Smits, Natalia Stash |
Technical Briefing: Adaptive hypermedia, adaptive presentation, adaptive navigation support, authoring support
AHA!, the Adaptive Hypermedia Architecture, was originally developed to support an on-line course with some user guidance through conditional (extra) explanations and conditional link hiding. This paper describes the many extensions and tools that have turned AHA! into a versatile adaptive hypermedia platform. It also shows how AHA! can be used to add different adaptive features to applications such as on-line courses, museum sites, encyclopedias, etc. The architecture of AHA! is heavily inspired by the AHAM reference model.
|Session 5: (Thursday am) Link Aggregation|
|12||Untangling Compound Documents on the Web ||Nadav Eiron, Kevin S. McCurley |
Data Mining, Hypertext Structure, Link Analysis, Search Engines, Semantic Web
Most text analysis is designed to deal with the concept of a "document", namely a cohesive presentation of thought on a unifying subject. By contrast, individual nodes on the World Wide Web tend to have a much smaller granularity than text documents. We claim that the notions of "document" and "web node" are not synonymous, and that authors often tend to deploy documents as collections of URLs, which we call "compound documents". In this paper we present new techniques for identifying and working with such compound documents, and the results of some large-scale studies on such web documents. The primary motivation for this work stems from the fact that information retrieval techniques are better suited to working on documents than individual hypertext nodes.
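One simple heuristic in the spirit of compound documents is to group URLs that share a host and directory, on the assumption that an author deployed them together. This is only a sketch of the idea; the paper's identification techniques are more sophisticated.

```python
from collections import defaultdict
from urllib.parse import urlparse

def group_by_directory(urls):
    # Heuristic: pages under the same host and directory are treated
    # as constituents of one compound document.
    groups = defaultdict(list)
    for url in urls:
        parts = urlparse(url)
        directory = parts.path.rsplit("/", 1)[0] or "/"
        groups[(parts.netloc, directory)].append(url)
    return dict(groups)

docs = group_by_directory([
    "http://example.org/thesis/ch1.html",
    "http://example.org/thesis/ch2.html",
    "http://example.org/index.html",
])
```

Here the two chapter pages collapse into one compound document while the site root remains separate, which is the granularity shift the abstract argues retrieval techniques prefer.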
|13||Providing Support for Browsing Intricately Interconnected Paths ||Pratik Dave, Unmil P. Karadkar, Richard Furuta, Luis Francisco-Revilla, Frank Shipman, Suvendu Dash |
Navigation, Path-centric browsing, Navigation metaphors, Directed paths, Walden's Paths, Path Engine.
Paths have long been recognized as an effective medium for communicating knowledge. They have been included within hypermedia systems as supporting tools to organize and present information. Graph-centric or Node-centric browsing are the two commonly identified hypertext-browsing paradigms. We believe that Path-centric browsing, the browsing behavior exhibited by path interfaces, is an independent browsing paradigm that combines useful aspects of the two commonly supported cases. The Walden's Paths project supports Path-centric traversal over Web-based materials. This paper expands the notion of our paths to include more generalized structures and interconnections across paths. We present an architecture for describing complex networks of such paths. We discuss the design and present a prototype implementation of the Path Engine, a tool that provides a linear interface to intricately interconnected paths.
|14||Publishing Evolving Metadocuments on the Web ||Andruid Kerne, Madhur Khandelwal, Vikram Sundaram |
Metadocuments are documents that consist primarily of references to other documents. Our active browsing web visualization tool generates an evolving series of navigable metadocument snapshots over time. It conducts expression-directed automatic retrieval of information from the web. The granularity of browsing is shifted from documents to the finer grained information elements, which are metadocument constituents. While the user can engage in direct manipulation expressions of interest and design, the program performs procedural visual composition of the information elements to form spatial hypertext. As prior versions of the tool lacked the save/load capability, they were entirely process-oriented. The metadocuments existed only as transient states. This paper is an early report on our new metadocument authoring and publishing capability, and some of its potential uses. Saved metadocuments can be published on the web. Once published, they can serve both as static navigable metadocuments, and as the jumping off point for further evolutionary browsing of the information space represented by the collected elements.
|15||Multi-Layered Cross-Media Linking ||Beat Signer, Moira C. Norrie |
Short paper: Linking, Navigation
The integration of printed paper and digital information enables new forms of enhanced reading. We present digitally augmented paper as a specific application of our more general Integration Server (iServer) architecture for cross-media information management. Multi-layered linking is introduced as a way to manage the granularity of link anchors and an application making active use of multi-layered links is presented. Furthermore, we point out how the concept of supporting multiple layers in link management can be applied to other media such as, for example, XHTML in combination with the XML Linking Language (XLink).
|Session 6: (Thursday am) Hypermedia Creation|
|16||Decentering the Dancing Text: from dance intertext to hypertext ||Tim Miles-Board, Deveril, Janet Lansdale, Leslie Carr, Wendy Hall |
Hypertext Theory, Hypertext Annotation, Hypertext Structure, Dance, Performance, Intertextuality
This paper explains and draws together two projects from different disciplines: dance studies and hypertext writing. Each project sets out to examine the processes and practices of hypertextuality, and to develop new ways of writing using electronic technology and the Internet. The dance studies project seeks to link the critical theory of intertextuality (as a means of dance interpretation) with the theoretical and practical concerns of hypertextuality. It hopes to show a convergence of the two into a working system for analysing dance in a network of people, institutions and information. The Associative Writing Framework (AWF) project seeks to explore how writers could best be supported in representing and exploring hypertextuality in a Web environment, and producing new hypertexts which integrate or "glue together" existing Web resources (ideas, concepts, data, descriptions, experiences, claims, theories, suggestions, reports, etc.). Following the combination of the two projects, we report on some initial evaluation of the AWF system by dance experts, and discuss where the relationship might lead and the potential outcomes of the collaboration.
|17||Simplifying Annotation Support for Real-World-Settings – a Comparative Study of Active Reading ||Hartmut Obendorf |
Short paper: Hypertext Annotation, Evaluation, Metadata, Active Reading
Despite the multitude of existing interfaces for annotation, little is known about their influence on the annotations created. In this paper, first findings of a comparative, video-supported study of active reading are presented. The support for active reading offered by traditional paper and pencil versus two existing annotation tools for the World Wide Web is examined, and possible implications for annotation systems are drawn. An immediate conclusion is the strong need for simplicity, and the importance of generic tools that can be adapted to the user’s task at hand.
|18||Collage, Composites, Construction ||Mark Bernstein |
Short paper: Dynamic Linking, World Wide Web, Software Agents, Linking, Spatial Hypertext
Tinderbox, a hypertext tool for making, analyzing, and sharing notes, explores the use of collage to build and share linked conceptual structures. Adopting a simple, regular data structure that exploits prototype inheritance and transclusion, Tinderbox helps build malleable, personal documents that are partially self-organizing.
|19||Combining Spatial and Navigational Structure in the Hyper-Hitchcock Hypervideo Editor ||Frank Shipman, Andreas Girgensohn, Lynn Wilcox |
Short paper: Hypertext Structure, Linking, Metadata, Spatial hypertext, Hypervideo, Interactive Video
Existing hypertext systems have emphasized either navigational or spatial expression of relationships between information objects. We are exploring the combination of these modes of expression in Hyper-Hitchcock, a hypervideo editor. Hyper-Hitchcock supports a form of hypervideo called “detail-on-demand video” due to its applicability to situations where viewers need to take a link to view more details on the content currently being presented. Authors of detail-on-demand video select, group, and spatially arrange video clips into linear sequences in a two-dimensional workspace. Hyper-Hitchcock uses a simple spatial parser to determine the temporal order of selected video clips. Authors add navigational links between the elements in those sequences. This combination of navigational and spatial hypertext modes of expression separates the clip sequence from the navigational structure of the hypervideo. Such a combination can be useful in cases where multiple forms of inter-object relationships must be expressed on the same content.
|20||Paper chase revisited --- a real world game meets hypermedia ||Susanne Boll, Jens Kroesche, Christian Wegener |
Short paper: Navigation, Interactive games and entertainment; geo-referenced hypermedia documents; location-aware mobile games
In this short paper, we present a location-aware mobile game which lets users play a paper chase game on a mobile device. By using their physical movement and location in the real world, the players navigate the virtual paper chase game and solve riddles on their way. The game is realized as a hypermedia document in which geo-referenced hyperlinks on a map lead to the hypermedia documents that form the riddles to be solved at the different physical checkpoints. The document is traversed by the physical movement of the GPS-located player, who approaches the checkpoints of the game through spatial navigation. The current state of the teams is tracked and monitored by the game server. The game is realized with wireless handheld devices together with GPS receivers in a wireless communication network over Web infrastructure.
|Session 7: (Friday am) Hypermedia Systems|
|21||IUHM, a hypermedia-based model for integrating open services, data and metadata |
Engelbart award winner
|Nanard Jocelyne, Nanard Marc, King Peter |
Hypertext Structure, Open Hypermedia, Semantics, Metadata, Service integration
This paper discusses a new hypermedia-based model known as IUHM. IUHM emerged as a result of the development of the OPALES system, a collaborative environment for exploring and indexing video archives in a digital library. A basic design requirement of OPALES is that it must permit and support the integration of new services throughout its life cycle. Thus, IUHM depends heavily upon the notions of extensibility and openness. Support for openness, extensibility and late binding of services is provided in the IUHM model by a single reflexive mechanism. This uniform mechanism is used for describing all relationships between arbitrary system entities, including services, data and metadata. The mechanism in question consists of a generic, computable hypertext structure with typed links, known as the Information Unit, and is the minimal structural scheme to which all encapsulated entities comply. We describe and justify the design of the Information Unit, as well as the semantics of its four link types, namely role, type, owner, relative. We further describe the minimal kernel of the runtime layer responsible for the dynamic behaviour specified by the IUHM compliant hypertext network. We discuss the mechanisms involved in the dynamic binding of services and service composition. We illustrate these notions by real-world examples of the integration of metadata services within the OPALES system.
|22||Structure and Behavior Awareness in Themis ||Kenneth M. Anderson, Susanne A. Sherba, William V. Lepthien |
Open Hypermedia, structural computing, structure, behavior, awareness
The field of structural computing is working to produce techniques and tools to ease the task of developing application infrastructure--infrastructure that provides common services such as persistence, naming, distribution, navigational hypermedia, etc., over a set of application-specific or domain-specific structures. Within structural computing, "structure" refers to a combination of data and relationships over that data. Structure servers support the specification of structure and the manipulation of structures with behaviors (operations). One important aspect of structural computing is the power and flexibility it provides application developers in constructing new applications. We believe a large part of this power is due to structural computing's ability to provide awareness services over both structure and behavior. The paper provides a definition of awareness services and describes the awareness services provided by the Themis structural computing environment. We evaluate the utility of these services by discussing how they are used within the InfiniTe information integration environment. The paper concludes with a discussion of what these services mean to the open hypermedia field (the field which gave rise to structural computing) and how they might influence the development of new hypermedia services.
|23||Increasing the Usage of Open Hypermedia Systems: A Developer-Side Approach ||Nikos Karousos, Manolis Tzagarakis, Ippokratis Pandis |
Short paper: Open Hypermedia, Hypermedia Services, Service Discovery, Web Services, Developer Support
This paper argues that the existence of a developer support framework is a critical issue to the usage of Open Hypermedia Systems (OHSs). For this reason, the OHS Community would benefit by the adoption of both a service discovery mechanism and a set of standards and tools to approach the development of hypermedia clients in a transparent and methodological manner.
|24||Storm: Using P2P to make the desktop part of the Web ||Benja Fallenstein, Tuomas J. Lukka, Hermanni Hyytiälä, Toni Alatalo |
Short paper: dangling links, peer-to-peer, location-independent identifiers, content addressable networks
We present Storm, a storage system which unifies the desktop and the public network, making Web links between desktop documents more practical. Storm assigns each document a permanent unique URI when it is created. Using peer-to-peer technology, we can locate documents even though our URIs do not include location information. Links continue to work unchanged when documents are emailed or published on the network. We have extended KDE to understand Storm URIs. Other systems such as GNU Emacs are able to use Storm through an HTTP gateway.
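Location-independent identifiers of the kind Storm assigns can be derived from document content alone, for example with a cryptographic hash, so the identifier stays valid wherever the bytes travel. The URN scheme below is illustrative, not Storm's actual URI syntax.

```python
import hashlib

def content_uri(data: bytes) -> str:
    # The identifier depends only on the bytes, not on where they are
    # stored, so a link keeps resolving to the same content after the
    # document is emailed or published elsewhere on the network.
    return "urn:example:sha256:" + hashlib.sha256(data).hexdigest()

uri = content_uri(b"my desktop document")
```

A peer-to-peer lookup layer then maps such an identifier to whichever peers currently hold a copy, which is the role the abstract assigns to the content-addressable network.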
|Session 8: (Friday am) Adaptive Hypermedia (2)|
|25||User-Controlled Link Adaptation ||Theophanis Tsandilas, m.c. schraefel |
Adaptive Hypermedia, Linking, Navigation, Link Analysis, adaptable hypermedia, history visualization, direct manipulation
This paper introduces an adaptable hypermedia approach applied to adaptive link annotation techniques. The combination of such direct manipulation with automated link annotation affords greater user control over page adaptation. In turn, this direct control better supports user focus in information discovery tasks. Unlike adaptive-only systems, our approach lets users both define multiple topics of interest and then manipulate how these topics' associated links are presented in a page. We discuss how the approach can be applied both to pages viewed as well as to the user's history list, thereby relieving users from the task of either adding to or organizing bookmarks. We describe the prototype developed to support these manipulations, as well as the adaptive architecture developed to support these controls.
|26||AHA! meets Auld Linky : Integrating Designed and Free-form Hypertext Systems ||David Millard, Koen O. Aben, Hugh Davis, Paul De Bra, Mark Weal |
Adaptive Hypermedia, Open Hypermedia, Hypertext Theory, Fundamental Open Hypermedia Model (FOHM)
In this paper we present our efforts to integrate two adaptive hypermedia systems that take very different approaches. The Adaptive Hypermedia Architecture (AHA!) aims to establish a consistently organized, strictly designed form of hypertext while Auld Linky takes an open and potentially sculptural approach, producing more freeform, less deterministic hypertexts. We describe the difficulties in reconciling the two approaches. This leads us to draw a number of conclusions about the benefits and disadvantages of both and the concessions that are required to combine them successfully.
|27||“Pluggable” user models for adaptive hypermedia in education ||M.R. Zakaria, A. Moore, C.D. Stewart, T.J. Brailsford |
Short paper: Adaptive Hypermedia
Most adaptive hypermedia systems used in education implement a single user model – inevitably originally designed for a specific set of circumstances. In this paper we describe an architecture that makes use of XML pipelines to facilitate the implementation of different user models.
|28||Is Simple Sequencing Simple Adaptive Hypermedia? ||Nor Aniza Abdullah, Hugh Davis |
Short paper: Adaptive Hypermedia, Learning Objects, Simple Sequencing
In this paper, we explore the differences between the Adaptive Hypermedia and IMS Simple Sequencing approaches. Both approaches provide learning material tailored for the learner’s current context. Understanding the difference between the approaches enables us to identify the best features of each, and thus to identify research agendas for improvement of adaptive hypermedia and of Web-based Learning Management Systems.
|Session 9: (Friday pm) Web Engineering|
|29||Do Adaptation Rules Improve Web Cost Estimation? ||Emilia Mendes, Steve Counsell, Nile Mosley |
Evaluation, World Wide Web, Web cost estimation, Web measurement, Web metrics, case-based reasoning
Analogy-based estimation has, over the last 15 years, and particularly over the last 7 years, emerged as a promising approach with comparable accuracy to, or better than, algorithmic methods in some studies. In addition, it is potentially easier to understand and apply; these two important factors can contribute to the successful adoption of estimation methods within Web development companies. We believe, therefore, that analogy-based estimation should be examined further. This paper compares several methods of analogy-based effort estimation. In particular, it investigates the use of adaptation rules as a contributing factor to better estimation accuracy. Two datasets are used in the analysis; results show that the best predictions are obtained for the dataset that, first, presents a continuous cost function, translated as a strong linear relationship between size and effort, and, second, is more unspoiled in terms of outliers and collinearity. Only one of the two types of adaptation rules employed generated good predictions.
|30||A Visual Environment for Dynamic Web Application Composition |
Nelson award candidate
|Kimihito Ito, Yuzuru Tanaka |
Navigation, Dynamic Linking, World Wide Web, Web Application Linkage, Web Application Wrapping
HTML-based interface technologies enable end-users to easily use various remote Web applications. However, it is difficult for end-users to compose new integrated tools from both existing Web applications and legacy local applications such as spreadsheets, chart tools and databases. In this paper, the authors propose a novel framework in which end-users can wrap remote Web applications into visual components called pads, and functionally combine them through drag&drop-paste operations. We use as the basis the meme media architecture IntelligentPad, which was developed by our research group. In the IntelligentPad architecture, each visual component, called a pad, has slots as data I/O ports. By pasting a pad onto another pad, users can integrate their functionalities. Users can visually create wrapper pads for Web applications they want to use by defining HTML nodes within the Web application to work as slots. Examples of such nodes include input forms and text strings on the Web page. Users can directly manipulate both wrapped Web applications and wrapped local legacy tools on their desktop screen to define application linkages among them. Since no programming expertise is required to wrap Web applications or to functionally combine them together, end-users can build new integrated tools from both wrapped Web applications and local legacy applications.
|31||Configuration Management in a Hypermedia-based Software Development Environment ||Tien N. Nguyen, Ethan V. Munson, John T. Boyland |
Short paper: Versioning, Software Agents, Hypertext Structure, Versioned Hypermedia, Configuration Management, Software Development Environment
Several researchers have explored the use of hypermedia technology in software development environments (SDEs). However, existing hypermedia-based SDEs offer only limited support for the evolutionary aspects of software projects. On the other hand, commercial software configuration management (SCM) systems have had noticeable success in helping developers manage system evolution. While researchers in the hypermedia community have acknowledged the need for strong version control support in their systems, they are still far from achieving this goal. The Software Concordance (SC) project is developing an SDE to experiment with the use of versioned hypermedia services for managing software documents and their logical relationships. This paper describes our versioned hypermedia framework, in which hypermedia services are built on top of an SCM system, providing uniform version control support for both software documents and their logical relationships.
|32||A Cooperative Hypermedia Solution to Work Management in Real-time Enterprises ||Weigang Wang, Frank Lillehagen |
Short paper: Cooperative hypermedia, real-time enterprise, work management
Many ERP and project management systems are geared toward monthly planning and analysis; often, managers cannot see what is going on in their businesses until it is too late to react. Real-time enterprises are emerging forms of agile organization that can detect delays and respond quickly. To meet the challenges of supporting these emerging real-time enterprises, this work develops multiple complementary hypermedia services in a cooperative hypermedia environment that support distributed project teams in cooperatively creating and modifying a project plan, carrying out the plan, and, more importantly, monitoring, analyzing, and adapting to changes in real time.
|Session 10: Links for a Better Web|
|33||Refinement of TF-IDF Schemes for Web Pages using their Hyperlinked Neighboring Pages |
Nelson award candidate
|Kazunari Sugiyama, Kenji Hatano, Masatoshi Yoshikawa, Shunsuke Uemura |
Evaluation, Hypertext Structure, Link Analysis, Search Engines, World Wide Web, Information retrieval, TF-IDF scheme
In IR (Information Retrieval) systems based on the vector space model, the tf-idf scheme is widely used to characterize documents. However, for documents with hyperlink structures such as Web pages, we believe that a technique is required for representing the contents of Web pages more accurately by exploiting the contents of their hyperlinked neighboring pages. In this paper, we first propose three methods for refining the tf-idf scheme for a target Web page using the contents of its hyperlinked neighboring pages, and then compare the retrieval accuracy of our proposed methods. Experimental results show that, in general, more accurate feature vectors of a target Web page can be generated when the contents of its hyperlinked neighboring pages, up to the second level in the backward direction from the target page, are utilized.
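The basic idea can be sketched as follows: compute a plain tf-idf vector for the target page, then blend in term weights from the pages that link to it. This is only an illustrative stand-in for the paper's three refinement methods; the blending weight `alpha` is a hypothetical parameter, not one from the paper.

```python
# Sketch: augmenting a page's tf-idf vector with terms from its
# in-linking neighbors (illustrative; not the paper's exact methods).
import math
from collections import Counter

def tfidf(doc_terms, all_docs):
    """Plain tf-idf vector for one document over a small corpus.
    Documents are lists of term strings."""
    n = len(all_docs)
    tf = Counter(doc_terms)
    vec = {}
    for term, f in tf.items():
        df = sum(1 for d in all_docs if term in d)  # document frequency
        vec[term] = (f / len(doc_terms)) * math.log(n / df)
    return vec

def refine_with_neighbors(target, in_neighbors, all_docs, alpha=0.5):
    """Blend the target's tf-idf vector with the vectors of pages
    linking to it; `alpha` (hypothetical) controls neighbor weight."""
    vec = tfidf(target, all_docs)
    for nb in in_neighbors:
        for term, w in tfidf(nb, all_docs).items():
            vec[term] = vec.get(term, 0.0) + alpha * w / len(in_neighbors)
    return vec
```

The effect is that terms appearing only in linking pages still acquire nonzero weight in the target's feature vector, which is what lets backward neighbors sharpen the page's representation.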
|34||Enhanced Web Document Summarization Using Hyperlinks ||Jean-Yves Delort, Bernadette Bouchon-Meunier, Maria Rifqi |
Data Mining, Hypertext Structure, World Wide Web, Evaluation, Link Analysis
This paper addresses the issue of Web document summarization. Because the textual content of Web documents is often scarce or irrelevant, and existing summarization techniques rely on it, many Web pages and websites cannot be suitably summarized. We define the context of a Web document as the textual content of all the documents linking to it. To summarize a target Web document, a context-based summarizer must first perform a preprocessing task, during which it decides which pieces of information in the source documents are relevant to the content of the target. The context-based summarizer then faces two issues: first, the selected elements may only partially deal with the topic of the target; second, they may be related to the target and yet not contain any cues about its content. In this paper we put forward two new summarization-by-context algorithms. The first uses both the content and the context of the document; the second is based only on the elements of the context. It is shown that summaries taking the context into account are usually much more relevant than those made only from the content of the target document. The optimal conditions of the proposed algorithms, with respect to the sizes of the content and the context of the document to summarize, are also studied.
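A crude sketch of the first family of approaches (content plus context): rank candidate sentences drawn from in-linking documents by word overlap with the target page's own text. The overlap heuristic here is a hypothetical stand-in; the paper's two algorithms are not reproduced in this listing.

```python
# Rough sketch of summarization by context: score sentences taken
# from in-linking documents against the target page's own text.
# (Illustrative heuristic only, not the paper's algorithms.)

def summarize_by_context(target_text, context_sentences, top_n=1):
    """Return the top_n context sentences with the highest fraction
    of words shared with the target's content."""
    target_words = set(target_text.lower().split())

    def score(sentence):
        words = set(sentence.lower().split())
        return len(words & target_words) / max(len(words), 1)

    ranked = sorted(context_sentences, key=score, reverse=True)
    return ranked[:top_n]
```

The paper's second algorithm would drop `target_text` entirely and rank context elements against each other, which is what makes it applicable to pages with little or no usable content.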
|35||Link analysis for collaborative knowledge building ||Harris Wu, Michael D. Gordon, Kurt DeMaagd |
Short paper: Data Mining, Link Analysis, Navigation, Dynamic Linking
We present a research project utilizing navigation and hyperlink data to aid collaborative knowledge building. We allow collaborators to personally organize documents and other research resources and make references to them. We combine their personal organizations and references to develop a unified, hierarchical categorization of these resources. We analyze collaborators’ navigation to identify prominent research activities as well as the key documents related to these activities. We examine prominence over time to identify research trends.
|36||“Common” Web Paths in a Group Adaptive System ||Maria Barra, Delfina Malandrino, Vittorio Scarano |
Short paper: Adaptive Hypermedia, Link Analysis, Recommendation Systems
In this paper we describe how we use users’ accesses to and interactions with pages to discover and recommend relevant Common Paths to a group of users. We collect data using GAS (Group Adaptive System), a social navigation environment we developed, and we are currently integrating the “Common Path” navigation tool into the user interface. The goal is to use the Common Path of a subset of users in the system as a recommendation to a user not in that subset.
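One loose way to picture common-path discovery: count fixed-length page-visit subsequences across group members' sessions and pick the most frequent. The abstract does not describe GAS's actual method, so this is purely an illustrative sketch with made-up data.

```python
# Hypothetical sketch of extracting a "common path" from group
# navigation logs (not the GAS algorithm, which the abstract does
# not specify).
from collections import Counter

def common_path(sessions, length=2):
    """Return the most frequent page-visit subsequence of the given
    length across all sessions, or () if there is none."""
    counts = Counter()
    for session in sessions:
        for i in range(len(session) - length + 1):
            counts[tuple(session[i:i + length])] += 1
    return max(counts, key=counts.get) if counts else ()

sessions = [["home", "papers", "pdf"],      # made-up navigation logs
            ["home", "papers", "authors"],
            ["papers", "pdf"]]
print(common_path(sessions))
```

A path mined from one subset of users could then be surfaced as a recommendation to users outside that subset, which is the stated goal of the "Common Path" tool.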