Publications catalog – books
Ambient Intelligence for Scientific Discovery: Foundations, Theories, and Systems
Yang Cai (ed.)
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); User Interfaces and Human Computer Interaction; Database Management; Computer Graphics; Computer Communication Networks
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2005 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-24466-0
Electronic ISBN
978-3-540-32263-4
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2005
Publication rights information
© Springer-Verlag Berlin/Heidelberg 2005
Table of contents
Science at the Speed of Thought
Judith E. Devaney; S. G. Satterfield; J. G. Hagedorn; J. T. Kelso; A. P. Peskin; W. L. George; T. J. Griffin; H. K. Hung; R. D. Kriz
Scientific discoveries occur with iterations of theory, experiment, and analysis. But the methods that scientists use to go about their work are changing [1].
Experiment types are changing. Increasingly, experiment means computational experiment [2], as computers increase in speed, memory, and parallel processing capability. Laboratory experiments are becoming parallel as combinatorial experiments become more common.
Acquired datasets are changing. Both computer and laboratory experiments can produce large quantities of data where the time to analyze data can exceed the time to generate it. Data from experiments can come in surges where the analysis of each set determines the direction of the next experiments. The data generated by experiments may also be non-intuitive. For example, nanoscience is the study of materials whose properties may change greatly as their size is reduced [3]. Thus analyses may benefit from new ways to examine and interact with data.
I - New Paradigms in Scientific Discovery | Pp. 1-24
Computational Biology and Language
Madhavi Ganapathiraju; Narayanas Balakrishnan; Raj Reddy; Judith Klein-Seetharaman
Current scientific research is characterized by increasing specialization, accumulating knowledge at a high speed due to parallel advances in a multitude of sub-disciplines. Recent estimates suggest that human knowledge doubles every two to three years – and with the advances in information and communication technologies, this wide body of scientific knowledge is available to anyone, anywhere, anytime. This may also be referred to as ambient intelligence – an environment characterized by plentiful and available knowledge. The bottleneck in utilizing this knowledge for specific applications is not accessing but assimilating the information and transforming it to suit the needs of a specific application. The increasingly specialized areas of scientific research often have the common goal of converting data into insight allowing the identification of solutions to scientific problems. Due to this common goal, there are strong parallels between different areas of applications that can be exploited and used to cross-fertilize different disciplines. For example, the same fundamental statistical methods are used extensively in speech and language processing, in materials science applications, in visual processing and in biomedicine. Each sub-discipline has found its own specialized methodologies making these statistical methods successful for the given application. The unification of specialized areas is possible because many different problems can share strong analogies, making the theories developed for one problem applicable to other areas of research. It is the goal of this paper to demonstrate the utility of merging two disparate areas of applications to advance scientific research. The merging process requires cross-disciplinary collaboration to allow maximal exploitation of advances in one sub-discipline for that of another. We will demonstrate this general concept with the specific example of merging language technologies and computational biology.
I - New Paradigms in Scientific Discovery | Pp. 25-47
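As a hedged illustration of the parallel the abstract describes (not code from the chapter; the function name and toy sequences are assumptions), the same n-gram counting routine that underlies statistical language modeling applies unchanged to a protein sequence treated as a string over the amino-acid alphabet:

```python
from collections import Counter

def ngram_counts(symbols, n):
    """Count overlapping n-grams in any sequence of symbols."""
    return Counter(tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1))

# Natural language: words as symbols
words = "the cat sat on the mat".split()
print(ngram_counts(words, 2).most_common(2))

# Computational biology: amino-acid letters as symbols (a made-up fragment)
protein = "MKTAYIAKQR"
print(ngram_counts(protein, 2).most_common(2))
```

The point is not the toy data but that a single statistical abstraction serves both domains, which is the kind of cross-fertilization the chapter argues for.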
Interactive Comprehensible Data Mining
Andy Pryke; Russell Beale
In data mining, or knowledge discovery, we are essentially faced with a mass of data that we are trying to make sense of. We are looking for something “interesting”. Quite what “interesting” means is hard to define, however – one day it is the general trend that most of the data follows that intrigues us; the next, it is why there are a few outliers to that trend. In order for a data mining system to be generically useful to us, it must therefore have some way in which we can indicate what is interesting and what is not, and that indication must be dynamic and changeable.
I - New Paradigms in Scientific Discovery | Pp. 48-65
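One minimal way to make “interesting” both user-indicated and dynamic, sketched here purely as an assumption (the scoring function and its weight parameter are illustrative, not from the chapter), is to rank data by a score with a single user-adjustable knob that shifts attention between the general trend and its outliers:

```python
def interestingness(value, mean, std, outlier_weight):
    """Score one datum: weight near 1 favors outliers, near 0 favors the trend."""
    z = abs(value - mean) / std                        # distance from the trend
    return outlier_weight * z + (1 - outlier_weight) / (1 + z)

data = [1.0, 1.1, 0.9, 1.05, 8.0]
mean = sum(data) / len(data)
std = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5

# The same data, re-ranked as the analyst's notion of "interesting" changes:
outlier_view = sorted(data, key=lambda x: interestingness(x, mean, std, 0.9), reverse=True)
trend_view = sorted(data, key=lambda x: interestingness(x, mean, std, 0.1), reverse=True)
print(outlier_view[0], trend_view[0])
```

Re-sorting under a new weight requires no recomputation of the data itself, which is what lets the same mining session flip between trend-seeking and outlier-seeking views.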
Scientific Discovery Within Data Streams
Andrew J. Cowell; Sue Havre; Richard May; Antonio Sanfilippo
The term ‘data-stream’ is an increasingly overloaded expression. It often means different things to different people, depending on domain, usage or operation. Harold (2003) draws the following analogy:
For our purposes we define a data-stream as a series of data (e.g. credit card transactions arriving at a clearing office, cellular phone traffic or environmental data from satellites) arriving in real time: it has an initiation and a continuous ingest of data, but no expectations about the amount, length, or end of the data flow. The data-stream does not have a database or repository as an intrinsic part of its definition; it is a ‘one-look’ opportunity from the perspective of data-stream analytics. We call each data element in the stream a token, and the complexity of these tokens ranges from simple (e.g. characters in a sentence: “T H I S I S A S T R E A M...”) to extremely complex (e.g. a detailed transaction record). The volume of data-streams is usually massive, and while each individual token may be rather uninformative, taken as a whole they describe the nature of the changing phenomena over time.
I - New Paradigms in Scientific Discovery | Pp. 66-80
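The ‘one-look’ property has a direct computational consequence: any statistic over the stream must be maintained incrementally as tokens arrive, since they cannot be revisited. A minimal sketch of this idea (the class and method names are assumptions, not from the chapter):

```python
class StreamStat:
    """Running count and mean over a one-pass token stream; no tokens are stored."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def ingest(self, token):
        # Incremental (Welford-style) update: each token is seen exactly once.
        self.count += 1
        self.mean += (token - self.mean) / self.count

stat = StreamStat()
for token in [3.0, 5.0, 7.0]:   # arriving in real time; no end of stream assumed
    stat.ingest(token)
print(stat.count, stat.mean)    # 3 5.0
```

Memory use is constant regardless of stream length, which is what makes such analytics feasible on the massive volumes the chapter describes.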
Shape as Memory Storage
Michael Leyton
In a sequence of books, I have developed new foundations for geometry that are directly opposed to the foundations that have existed from Euclid to modern physics, including Einstein. The central proposal of the new foundations is this:
Shape ≡ Memory Storage
Let us see how this contrasts with the standard foundations for geometry that have existed for almost three thousand years. In the standard foundations, a geometric object consists of those properties of a figure that do not change under a set of actions. These properties are called the invariants of the actions. Geometry began with the study of invariance, in the form of Euclid’s concern with congruence, which is really a concern with invariance (properties that do not change). And modern physics is based on invariance. For example, Einstein’s principle of relativity states that physics is the study of those properties that are invariant (unchanged) under transformations between observers. Quantum mechanics studies the invariants of measurement operators.
My argument is that the problem with invariants is that they are memoryless. That is, if a property is invariant (unchanged) under an action, then one cannot infer from the property that the action has taken place. Thus I argue: invariants cannot act as memory stores. In consequence, I conclude that geometry, from Euclid to Einstein, has been concerned with memorylessness. In fact, since standard geometry tries to maximize the discovery of invariants, it is essentially trying to maximize memorylessness. My argument is that these foundations for geometry are inappropriate to the age; e.g., people want computers that have greater memory storage, not less.
As a consequence, I embarked on a 30-year project to build up an entirely new system for geometry – a system that was recently completed. Rather than basing geometry on the maximization of memorylessness (the aim from Euclid to Einstein), I base geometry on the maximization of memory storage. The result is a system that is profoundly different, both on a conceptual level and on a detailed mathematical level. The conceptual structure is elaborated in my book Symmetry, Causality, Mind (MIT Press, 630 pages); and the mathematical structure is elaborated in my book A Generative Theory of Shape (Springer-Verlag, 550 pages).
II - Ambient Cognition | Pp. 81-103
Spatial Cues in 3D Visualization
Geoffrey S. Hubona; Gregory W. Shirah
The ever-increasing power and complexity of available hardware and software has enabled the development of a wide variety of visualization techniques that allow the ever more concise presentation of data. Associated with this trend is the challenge to condense and convey ever-increasing amounts of useful information into smaller and smaller spaces. Depicting computer-generated visualizations in three dimensions (3D), similar to how we perceive the real world, is one approach to condense these visual presentations of information. However, there is an inherent dilemma in this approach: the visual medium on which the vast majority of 3D imagery is displayed is inherently two dimensional (2D): a flat computer monitor. Although there are some immersive or ‘true 3D’ devices on the market, such as the fishbowl rotating display, the LCD layered 3D monitor, and 3D displays marketed by companies such as SeeReal Technologies, most of these devices are either too small or too expensive for the average user.
II - Ambient Cognition | Pp. 104-128
Textual Genre Analysis and Identification
David Kaufer; Cheryl Geisler; Suguru Ishizaki; Pantelis Vlachos
This chapter reports on a research program that investigates language and text from a rhetorical point of view. By rhetorical, we mean an approach that features the relationship between the speaker and the audience or between the writer and the reader. Fundamental to a rhetorical approach to language is an interest in linguistic and textual agency: how speakers and writers manage to use language strategically to affect audiences; and how audiences and readers, agents in their own right, manage, or not, to pick up on, register, and respond to a speaker or writer’s bids. Historical and cultural factors play a central role in how speakers and writers settle into agent roles vis-à-vis listeners and readers. It is therefore no surprise that rhetorical approaches to language treat language, culture, and history as deeply permeable with one another. Rhetorical approaches to language have, since ancient Greece, been the dominant approach for educating language-users in the western educational curriculum [1].
II - Ambient Cognition | Pp. 129-151
Cognitive Artifacts in Complex Work
Peter H. Jones; Christopher P. Nemeth
The Indian folk tale recorded in the well-known John Saxe poem tells of six blind men, each grabbing a different part of an elephant, and describing their impression of the whole beast from a single part’s perspective. So the elephant appears to each blind man to be like a snake, a fan, a tree, a rope, a wall, a spear. As the poem concludes:
“And so these men of Indostan, Disputed loud and long, Each in his own opinion, exceeding stiff and strong. Though each was partly right, All were in the wrong.”
Although this tale suggests a general metaphor for poor collaboration and social coordination, the insinuation of blindness indicates an inability to share the common information that is normally available through visual perception. When fundamental cognitive resources such as shared information or visual cues are missing, collaborative work practices may suffer from the “anti-cognition” suggested by the elephant metaphor. When individuals believe they are contributing to the whole, but are unable to verify the models that are held by other participants, continued progress might founder. We may find such “blind men” situations when organizations value and prefer independent individual cognition at the expense of supporting whole-system coordination. Blindness to shared effects is practically ensured when those who work together are not able to share information.
II - Ambient Cognition | Pp. 152-183
Multi-modal Interaction in Biomedicine
Elena V. Zudilova; Peter M. A. Sloot
Everybody agrees that user tasks and preferences should play a central role in the design and development of applications oriented to non-computer experts. Nevertheless, even biomedical applications are sometimes developed in relative isolation from the real needs of end-users and the environments where they are supposed to be used.
To provide a clinician with an intuitive environment to solve a target class of problems, a biomedical application has to be built in such a way that a user can exploit modern technologies without specialized knowledge of underlying hardware and software [18]. Unfortunately, in reality the situation is different. Many developers do not take into account the fact that their potential users are people, who are mostly inexperienced computer users, and as a result they need intuitive interaction capabilities and a relevant feedback adapted to their knowledge and skills.
User comfort is very important for the success of any software application [13]. But very often we forget that usability problems may arise not only from an ‘uncomfortable’ graphical user interface (GUI), but also from a projection modality chosen incorrectly for deploying an interactive environment [16].
Existing projection modalities have not yet been sufficiently investigated with respect to usability factors. Meanwhile, selecting an appropriate projection modality in accordance with the user’s tasks, preferences, and personal features might help in building a motivating environment for biomedical purposes. In this chapter we summarize our recent findings related to this research and introduce a new concept of multi-modal interaction based on combining virtual reality (VR) and desktop projection modalities within the same system. As a case study we used a biomedical application simulating vascular reconstruction [2,22].
III - Ambient Intelligence Systems | Pp. 184-201
Continuous Body Monitoring
Jonathan Farringdon; Sarah Nashold
When a person who usually wears a watch forgets to put it on one day, it is common for them to look at their wrist anyway expecting the watch to be there. Without looking or touching to check, they are not aware of the watch’s presence or lack thereof. The watch becomes a part of their expected experience due to its comfort and continuous utility. It satisfies Thad Starner’s definition of a wearable system as “always with you, always on, and always accessible” [22]. At the other end of the ambient intelligence spectrum, the design vision of Stefano Marzano is one where the “ ‘relationship’ between us and the technology around us will be of utmost importance. This relationship will no longer be one of user towards machine but of person towards ‘object-become-subject’, thus towards something that is capable of reacting, of being educated and responding [18].” Among the myriad applications envisioned in an “ambient culture” by Marzano’s team are the person, their clothing, their home and furniture, and an amenable outside world [17].
III - Ambient Intelligence Systems | Pp. 202-223