Long before the advent of personal computing, Vannevar Bush envisioned the Memex as a means of addressing information overload by enhancing the management and refinding of information through associative trails. While other hypertext pioneers such as Douglas Engelbart and Ted Nelson introduced advanced hypertext concepts to create more flexible document structures and augment the human intellect, some of their original ideas are still absent from our daily interaction with documents and information systems. Today, many digital document formats mimic paper documents without fully leveraging the opportunities offered by digital media, and documents are often organised in hierarchical file structures. In this keynote, we explore how cross-media technologies, such as the resource-selector-link (RSL) hypermedia metamodel, can be used to organise and interact with information across digital and physical spaces. While emerging wearable mixed reality (MR) headsets offer new possibilities to augment the human intellect, we discuss how hypermedia research, in combination with other technologies, could play a major role in providing the necessary linked data and hypertext infrastructure for this augmentation process. We outline the challenges and opportunities for next-generation multimodal human-information interaction enabled by flexible cross-media information spaces and document structures in combination with upcoming mixed and virtual reality solutions.
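To make the RSL metamodel mentioned above more tangible, the following is a minimal sketch of its three core abstractions as generally described in the literature (resources, selectors addressing parts of resources, and first-class n-ary links). It is an illustrative simplification under our own naming, not the keynote's or the metamodel's actual implementation.

```typescript
// Sketch of RSL's core abstractions; names and fields are our own
// simplification, not the full metamodel.

// Anything that can participate in a link: a resource, a selector, or a link.
type Entity = Resource | Selector | Link;

// A resource represents a whole piece of media (text, image, video, ...).
interface Resource {
  kind: "resource";
  uri: string;
}

// A selector addresses part of a resource, e.g. a text span or image region.
interface Selector {
  kind: "selector";
  resource: Resource;
  spec: string; // media-specific addressing, e.g. "chars=120-180"
}

// Links are first-class and n-ary: many sources, many targets, and links
// may themselves be linked, enabling structures over structures.
interface Link {
  kind: "link";
  sources: Entity[];
  targets: Entity[];
}
```

Because links can connect selectors of arbitrary media types, the same structure can in principle span digital documents and representations of physical artefacts, which is what makes the model attractive for cross-media information spaces.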
The paper discusses the concept of media translation, a form of enhanced translation that goes beyond the linguistic. The case studies used as examples are drawn from reconstruction work undertaken in our labs, notably the migrations of Richard Holeton’s Figurski at Findhorn on Acid and Michael Joyce’s Twilight, A Symphony from the Storyspace platform to open Web languages, as well as the reconstruction of Christy Sheffield Sanford’s Red Mona from an unsupported programming language to one that is compatible with contemporary browsers. We focus our attention on selected pre-Web features of born-digital literature, namely the loading screen, multilink, Tinker and Bell Keys, and link names and paths.
Data stories are about revealing and communicating insights from complex data. In this paper, we propose conversational data stories, which support end users in understanding the key findings of the data analysis at hand through natural language conversation. Creating these stories manually requires considerable effort to understand the data and craft visuals. With increasingly powerful generative large language models (LLMs), automating natural language processing as well as the creation of data stories is a promising field. We present a concept for a conversational data storytelling system that integrates LLMs as well as explainable AI, describe the requirements collected for our system concept, and show how these requirements are addressed. To demonstrate the potential of our approach, we provide a use case scenario and a discussion. This work is intended to serve as a basis for future research investigating the technical reliability and the user experience of such a system.
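As an illustration of what one conversational turn in such a system could look like, the following sketch answers a user question against a precomputed summary of the analysis, grounding the LLM in the data rather than in free-form generation. The endpoint, model name, and request/response shape are hypothetical placeholders (an OpenAI-style chat API is assumed), not the system described in the paper.

```typescript
// Minimal sketch of one turn in a conversational data story.
// LLM_ENDPOINT and the model name are hypothetical; any OpenAI-compatible
// chat completion API with the same request shape would work similarly.

const LLM_ENDPOINT = "https://example.com/v1/chat/completions"; // placeholder

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Ground the model in a compact, precomputed summary of the analysis so
// that answers stay tied to the actual findings.
async function answerDataQuestion(
  dataSummary: string,
  question: string,
  history: ChatMessage[],
): Promise<string> {
  const messages: ChatMessage[] = [
    {
      role: "system",
      content:
        "You narrate the key findings of a data analysis. " +
        "Only use facts from this summary:\n" + dataSummary,
    },
    ...history, // earlier turns keep the story conversational
    { role: "user", content: question },
  ];

  const res = await fetch(LLM_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "some-chat-model", messages }),
  });
  const json = await res.json();
  return json.choices[0].message.content; // OpenAI-style response shape
}
```

Keeping the data summary in the system prompt, rather than asking the model to analyse raw data, is one plausible way to trade generative flexibility for the reliability that the paper flags as a subject for future evaluation.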
Traditional spatial hypertext systems, predominantly limited to two-dimensional (2D) interfaces, offer little support for addressing long-debated inherent problems such as orientation difficulties and navigation in large information spaces. In this context, we present opportunities from interdisciplinary fields such as immersive analytics (IA) and embodied cognition that may mitigate some of these challenges. However, while some research has explored the extension of spatial hypertext to three dimensions, there is a lack of discussion of recent advances in virtual reality technologies and related fields, and of their potential impact on immersive spatial hypertext systems. This paper addresses this gap by exploring the integration of immersive technologies into spatial hypertext systems, proposing a novel approach to enhance user engagement and comprehension through three-dimensional (3D) environments and multisensory interaction.
This paper explores the integration of hypertext structures within Virtual Reality (VR) environments, differentiating between two distinct design philosophies: treating VR as a native framework for 3D, embodiment-enabled spaces similar to traditional 2D spatial hypertext, and utilizing hypertext to enhance VR experiences. Focusing on the latter approach, we propose an abstract knowledge layer that bridges typical VR systems and human thinking, thus facilitating the integration of human cognitive capabilities. Finally, we explore the ethical implications of VR systems that arise in the presented context and propose hypertext as a paradigm to address some of these concerns.
If we make text more interactive, it can augment how we think and communicate, and to enable this we need better data and metadata. To better understand this and to introduce this perspective to larger groups, I have developed Author & Reader for macOS, iOS and visionOS, and I am Co-PI with Dene Grigar on the Alfred P. Sloan Foundation-supported Future Text Lab, where we experiment to experience this in Extended Reality space (XR/VR/AR). The research aims to support high-resolution thinking with text by providing more fine-grained addressability, interactions and views of the user’s information. This has caused us to question the foundations of what documents are and what they can be, as well as what interactions in a fully immersive space can be, using our bodies and/or voices. While the software is commercially available to macOS users for the authoring and reading of academic papers, the Lab’s experimental experiences have been implemented and tested in XR to give users richer control over how they read and interact with papers in proceedings and journals, at minimal cost to authors, editors or publishers. Our aim is to augment interactions with knowledge, primarily through text and primarily in XR, focused initially on academic users, enabling us to reach the goal of high-resolution thinking to augment our intellect.
We present our approach for interactive cultural heritage storytelling in WebXR. To this end, we describe our scenes’ structure, consisting of (stylized) photospheres of the historic locations, 3D models of 3D-scanned historic artifacts, and animated 2D textures of historic characters generated with a machine learning toolset. The result is a platform-independent web application in an immersive, interactive WebXR environment that runs in browsers on PCs, tablets, phones and XR headsets, thanks to the underlying open-source framework A-Frame. Our paper describes the process, the results and the limitations in detail. The resulting application, designed for the Fichtelgebirge region in Upper Franconia, Germany, offers users an immersive digital time-travel experience, both in virtual space and within a museum setting, connecting real artifacts and virtual stories.
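To illustrate the described scene structure, the following sketch assembles the three named building blocks (a photosphere, a glTF artifact, and an animated 2D character) from A-Frame’s standard primitives via the DOM. The asset paths and positions are placeholder assumptions, and the sketch presumes the A-Frame script is already loaded in the page; it is not the actual application.

```typescript
// Minimal sketch of one WebXR scene using standard A-Frame primitives.
// Assumes <script src=".../aframe.min.js"> is loaded; all asset paths
// and positions below are placeholders, not the project's real assets.

const scene = document.createElement("a-scene");

// Stylized photosphere of the historic location as a 360° background.
const sky = document.createElement("a-sky");
sky.setAttribute("src", "photospheres/market-square.jpg"); // placeholder
scene.appendChild(sky);

// 3D-scanned historic artifact, loaded as a glTF model.
const artifact = document.createElement("a-entity");
artifact.setAttribute("gltf-model", "url(models/artifact.glb)"); // placeholder
artifact.setAttribute("position", "0 1 -2");
scene.appendChild(artifact);

// Animated 2D texture of a historic character, shown as a video plane.
const character = document.createElement("a-video");
character.setAttribute("src", "characters/narrator.mp4"); // placeholder
character.setAttribute("position", "1.5 1.6 -2");
scene.appendChild(character);

document.body.appendChild(scene);
```

Because A-Frame renders such scenes with plain web technology, the same markup runs on flat screens and, via the browser’s WebXR support, on headsets, which is what gives the application its platform independence.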