The presentation highlights structural differences between three hypertext authoring systems created for the personal computer: HyperCard (1987-1998), created by Apple's Bill Atkinson and bundled free on Macintosh computers; Hypergate (1987-1991), developed by Mark Bernstein of Eastgate Systems, Inc.; and Storyspace (1987-present), produced originally by Michael Joyce, Jay David Bolter, and John B. Smith and licensed to Eastgate Systems, Inc., in 1990.
With the introduction of powerful VR in the form of the Vision Pro next year, 40 years after the Macintosh, we will see the introduction of potentially ground-breaking means through which we can interact with our information and each other, to truly augment our mental abilities. However, I am concerned that we are wasting this historic opportunity. We think we ‘know’ what digital text is: we think digital text is what we use in word processing applications, email, messages, spreadsheets and the web, and that's pretty much it. Because we ‘know’ what digital text is, we stop exploring what it can be.

Doug Engelbart remarked that dreaming is hard work. He was so right, and I believe that we need to be shaken out of our paradigms to think anew. That is, potentially, the most valuable thing being at the cusp of a headset world gives us: the opportunity to think anew, to inspire new and more open thinking about what interactivity is, what augmentation can do, how we can view our information, how we can connect, and how we can become better, more connected, humans.

There are some very basic issues we need to address before we can truly say we are opening up the vistas available to us. If we as a community do not look into this deeply, I think we are headed for VR experiences as boxed-in as the CDs and DVDs of yesteryear. If we do not really consider these issues, and instead delegate them to the large tech companies who make the equipment, we will not own the future, we will not own the potential; they will, and their motivations are not the same as ours.

A scenario. I open a book while wearing a headset–it doesn't matter if I'm in VR or AR mode–and it floats in front of me, at a comfortable reading distance and angle. So far so nice and simple. I then make a gesture and all the images–the photographs, charts, graphs and tables–flow out of the book and onto the wall at the back of the room. There is also a beautiful 3D model encoded in one of the pages. In the traditional, flat version of the book, this model appears as a still image, and the metadata is recorded at the back of the book for situations like this. I take this model out and put it right in front of me so that I can look at it properly and interact with it. I also decide to take the Reference section out of the back of the book and place it on the side, over here, and from that I can easily summon the sources and see relationships between them.

This sounds like a very interesting environment for Spatial Hypertext, right? Maybe we should call it something new, since it is fully dimensional, not flattened. Apple has introduced the name Spatial Computing with the Vision Pro, so how about Spatial Computing Hypertext? That is only intended as a provocation, not as a serious suggestion; we just need to think about what this space is. This space is not cyberspace, it is a visual environment on a human scale. Headset computing is built for humans; it is not built for machine-machine interaction or for offloading thinking, which, in contrast, much of AI is. It therefore needs to fit us, like no technology has needed to before, if it is to extend us.
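To make the scenario concrete, the sketch below models a book as a set of detachable components with spatial placements. Everything here–the class names, the fields, the anchor vocabulary–is a hypothetical illustration under my own assumptions, not a Vision Pro API or an existing spatial hypertext system:

```python
# A minimal sketch of a detachable-component document model for the scenario
# above. All names and fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Placement:
    x: float = 0.0               # metres, relative to the reader
    y: float = 0.0
    z: float = -0.6              # a comfortable reading distance in front
    anchored_to: str = "reader"  # e.g. "reader", "wall", "table"

@dataclass
class Component:
    kind: str                    # "image", "chart", "table", "model3d", "references"
    source_page: int             # where it lives in the flat version of the book
    metadata: dict = field(default_factory=dict)   # e.g. the 3D model data noted at the back
    placement: Placement = field(default_factory=Placement)

@dataclass
class SpatialBook:
    title: str
    components: list[Component] = field(default_factory=list)

    def detach(self, kind: str, anchored_to: str) -> list[Component]:
        """Move every component of a given kind out of the book in response
        to a single gesture, e.g. all images onto the wall."""
        moved = [c for c in self.components if c.kind == kind]
        for c in moved:
            c.placement = Placement(anchored_to=anchored_to)
        return moved

book = SpatialBook("Example", [
    Component("image", source_page=12),
    Component("model3d", source_page=47, metadata={"format": "USDZ"}),
    Component("references", source_page=210),
])
book.detach("image", anchored_to="wall")        # the images flow onto the wall
book.detach("references", anchored_to="table")  # the Reference section set to one side
```

The only point of the sketch is that once the components of a flat document are typed and addressable, they become things a reader can place anywhere in the room.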
Determining when instructor intervention is needed, based on learners’ comments and their urgency, is a known challenge in massive open online course (MOOC) environments. Prior work has addressed this challenge with autonomous machine learning (ML) models. However, these models are black-box in nature, and their outputs are difficult for humans to interpret. This paper shows how to apply eXplainable Artificial Intelligence (XAI) techniques to interpret a MOOC intervention model for detecting urgent comments. As the comments were selected from a MOOC course and annotated by human experts, we additionally compare the confidence between annotators (annotator agreement confidence) with the ML model's estimated class score, to support intervention decisions. Serendipitously, we show, for the first time, that XAI can further be used to support annotators in creating high-quality, gold-standard datasets for urgent intervention.
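As an illustration of the comparison just described, the following sketch trains a toy urgency classifier, explains one prediction with LIME, and sets the model's class score beside a simple annotator-agreement figure. The data, the model, the agreement number and the choice of LIME are assumptions made for illustration; this is not the paper's actual pipeline:

```python
# Toy illustration: explain an "urgent comment" classifier with LIME and
# compare its class score with inter-annotator agreement.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from lime.lime_text import LimeTextExplainer

comments = [
    "I cannot submit my assignment and the deadline is tonight, please help!",
    "Thanks for a great week of lectures.",
    "The video for lesson 3 will not load at all.",
    "Looking forward to the next module.",
]
labels = [1, 0, 1, 0]  # 1 = urgent, 0 = not urgent

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(comments, labels)

comment = "The grading page crashes every time, and the exam is tomorrow!"
ml_score = clf.predict_proba([comment])[0][1]  # model's class score for "urgent"

# Suppose 4 of 5 human annotators marked this comment urgent:
agreement = 4 / 5                              # annotator agreement confidence

explainer = LimeTextExplainer(class_names=["not urgent", "urgent"])
explanation = explainer.explain_instance(comment, clf.predict_proba, num_features=5)

print(f"ML class score: {ml_score:.2f}, annotator agreement: {agreement:.2f}")
print(explanation.as_list())  # words pushing the prediction towards or away from "urgent"
```

Setting the two numbers side by side, with the word-level explanation attached, is the kind of evidence an annotator could use when deciding whether a disputed comment belongs in a gold-standard set.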
We are entering a period of unprecedented collaboration between authors and computers, where artificial intelligence in particular seems likely to act increasingly in a co-authoring capacity. Automated or procedural storytelling represents one exciting avenue of research. By entering prompts and parameters into an AI text generator like ChatGPT, authors could leverage an enormous textual corpus to generate a “new” work that appears to have been authored by a human.
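As a minimal sketch of what such prompt-and-parameter co-authoring might look like in code, here is one hypothetical example using the OpenAI Python client; the model name, prompt template and parameters are my own assumptions, not a method taken from the paper:

```python
# A minimal sketch of prompt-driven procedural storytelling via an LLM API.
# The model name, prompt and parameters are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_scene(setting: str, protagonist: str, tone: str) -> str:
    """Turn authorial parameters into a prompt and return generated prose."""
    prompt = (
        f"Write a 150-word scene set in {setting}, "
        f"following {protagonist}, in a {tone} tone."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=0.9,      # higher temperature for more varied prose
    )
    return response.choices[0].message.content

print(generate_scene("a derelict lighthouse", "an ageing cartographer", "wistful"))
```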
This paper proposes an alternative platform, one more reflective of the collaborative and organic creative process. Approached as a tool for augmentation, Mother showcases the potential for spatial hypertext to work alongside the author.
We trace the development of our ideas about how hypertext can uniquely support galleries, libraries, archives and museums, laying out the vision of our Touch Archive project. Our first steps arose from interviews with curators about what they wanted and needed in order to improve and expand the types of exhibits they offer.
We then consider what is needed to instantiate these new techniques, and the broader challenges that remain in this area.
Systems supporting argumentation have existed for a long time and have appeared in many different forms, ranging from online forums to mind maps and decision support systems. Such systems are used on a daily basis by teams that collaborate in data-intensive and cognitively complex settings, such as teams involved in DNA analysis, marketing or drug-testing research. In these settings, teams collect large amounts of data and use sophisticated data mining tools to uncover patterns in that data. Current argumentation support systems take little care to integrate the data mining tools these teams require. In this paper, we present an approach that integrates argumentation support systems with data mining services to augment collaboration and decision making in such teams. Specifically, the proposed approach allows data mining services and their outcomes to be meaningfully used in argumentative discourses. This integration enables the contextualization of these services, their execution and their respective results by the argumentative discourse, greatly facilitating its monitoring and understanding. Moreover, the paper presents how the proposed argumentation system has been developed on top of a Component-Based Open Hypermedia System (CB-OHS). The overall aim of these efforts is to show that CB-OHSs are an appropriate architecture for developing complex structuring paradigms in the field of argumentation, especially in situations that require a synergy between human and machine intelligence.
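As a loose illustration of the integration idea, the sketch below treats a data-mining service invocation as a first-class node in an argumentation graph, so that its execution and its results are contextualised by the surrounding discourse. All class and method names are hypothetical; this is not the CB-OHS interface described in the paper:

```python
# Sketch: a data-mining service call as a first-class node in an
# argumentation graph, so its results live inside the discourse context.
# All names are hypothetical, not the paper's CB-OHS API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    text: str
    kind: str = "position"  # "issue", "position", "evidence", "service"
    children: list["Node"] = field(default_factory=list)

    def add(self, node: "Node") -> "Node":
        self.children.append(node)
        return node

@dataclass
class ServiceNode(Node):
    run: Callable[[], str] = lambda: ""
    kind: str = "service"

    def execute(self) -> "Node":
        """Run the mining service and attach its outcome as evidence."""
        return self.add(Node(self.run(), kind="evidence"))

# A discussion in which a clustering service backs one position:
issue = Node("Which customer segments respond to the campaign?", kind="issue")
claim = issue.add(Node("Segment B responds best."))
mining = claim.add(ServiceNode(
    "k-means over campaign responses",
    run=lambda: "Cluster 2 (segment B): 3.1x baseline response rate",
))
mining.execute()  # the service result is now anchored inside the argument
```

Because the service and its output hang off the position they support, anyone monitoring the discussion can see not only what was claimed but which analysis, run when and with what result, was offered in its support.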