Abstract:
As the use of distributed systems spreads and the applications designed for them become increasingly demanding, the optimal design of distributed systems becomes a critical issue. Designing a distributed system has grown more complex, owing to the number of alternatives for each design decision and to the many parameters that influence the overall performance of the system. It is therefore necessary to use software tools capable of accepting a description of the user's requirements and suggesting solutions to the problem of designing a distributed system that meets them. In this paper, we present a disciplined approach to the construction of such a software tool, which combines methods from the Artificial Intelligence domain, used to design the distributed system, with simulation techniques, used to estimate the system's overall performance.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
sw-methodology-for-designing-dist-systems.pdf | 258.56 KB |
Abstract: In this study we present an Enhanced File System for a Distributed Unix Environment. The Enhanced Unix File System implements a flexible protection mechanism for files and directories. It is based on the concept of Access Control Lists (ACLs), which allow different permissions on files and directories to be granted to specific users. This work was developed under the SunOS/NFS distributed environment, using Remote Procedure Calls (RPC) to implement the communication between the client and the server. The system consists of a daemon process acting as a server and a set of client processes that provide the various file and directory services. Many different servers and clients may be running over the same network; the client processes transparently locate the appropriate server for each transaction. The system also provides an open programming environment for the development of new applications. Both the user and the programmer interface of the system are kept close to standard UNIX.
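As an illustration of the ACL concept this abstract describes, the following Python sketch shows how a per-file access control list might be evaluated. The structures and the `may_access` helper are hypothetical, for illustration only, and are not taken from the paper's implementation.

```python
# Minimal sketch of ACL-based permission checking, assuming a simple
# per-file list of (principal, permissions) entries; purely illustrative.
from dataclasses import dataclass

@dataclass
class AclEntry:
    principal: str   # user or group name
    perms: set       # subset of {"read", "write", "execute"}

def may_access(acl: list[AclEntry], user: str, groups: set[str], perm: str) -> bool:
    """Grant access if any ACL entry matching the user (or one of the
    user's groups) carries the requested permission."""
    for entry in acl:
        if entry.principal == user or entry.principal in groups:
            if perm in entry.perms:
                return True
    return False

# Example: alice may write, members of group "staff" may only read.
acl = [AclEntry("alice", {"read", "write"}), AclEntry("staff", {"read"})]
assert may_access(acl, "alice", {"staff"}, "write")
assert not may_access(acl, "bob", {"staff"}, "write")
```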
Abstract:
Contemporary networks accommodate multiple priorities, aiming to provide suitable QoS levels to different traffic classes. In the presence of multiple priorities, a scheduling algorithm is employed to select, each time, the next packet to transmit over the data link. Class-based Weighted Fair Queuing (CBWFQ) scheduling and its variations are widely used as scheduling techniques, since they are easy to implement and prevent the low-priority queues from being completely neglected during periods of high-priority traffic. Under this scheduling, low-priority queues have the opportunity to transmit packets even when the high-priority queues are not empty. In this paper, the modeling, analysis and performance evaluation of a single-buffered, dual-priority multistage interconnection network (MIN) operating under the CBWFQ scheduling policy is presented. Performance evaluation is conducted through simulation, and the performance measures obtained can be valuable assets for MIN designers seeking to minimize overall deployment costs and deliver efficient systems.
Attachment | Size |
---|---|
j039.pdf | 526.64 KB |
Abstract:
Contemporary networks support multiple priorities, aiming to differentiate the QoS levels offered to individual traffic classes. Support for multiple priorities necessitates a scheduling algorithm to select, each time, the next packet to transmit over the data link. Class-based Weighted Fair Queuing (CBWFQ) scheduling and its variations are widely used as scheduling techniques, since they are easy to implement and prevent the low-priority queues from starvation, i.e. receiving no service during periods of high-priority traffic. CBWFQ thus effectively offers low-priority queues the opportunity to transmit packets even when the high-priority queues are not empty. In this paper, we present the modeling and performance evaluation of a single-buffered, four-priority multistage interconnection network (MIN) operating under the CBWFQ scheduling policy. Performance evaluation is conducted through simulation, and the performance metrics obtained can be used by MIN designers to set the appropriate queue weights according to the expected traffic and the desired QoS levels for each priority class, thus delivering efficient systems.
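To make the CBWFQ idea concrete, here is a small Python sketch of weight-proportional selection among per-class queues. The credit-based style, the weights and the queue structure are illustrative assumptions, not the scheduler studied in the paper.

```python
from collections import deque

def cbwfq_dequeue(queues, weights, credits):
    """Pick the next packet in a class-based weighted fashion.
    queues:  {cls: deque of packets}, weights: {cls: share},
    credits: {cls: accumulated credit}, mutated across calls."""
    # Refill credits in proportion to the class weights.
    for cls, w in weights.items():
        credits[cls] += w
    # Serve the backlogged class with the highest accumulated credit,
    # so every non-empty class is eventually served (no starvation).
    backlogged = [cls for cls in queues if queues[cls]]
    if not backlogged:
        return None
    cls = max(backlogged, key=lambda c: credits[c])
    credits[cls] -= sum(weights.values())  # pay for the transmission
    return queues[cls].popleft()

queues = {"hi": deque(["h1", "h2"]), "lo": deque(["l1"])}
credits = {"hi": 0.0, "lo": 0.0}
# With weights 3:1, roughly three high-priority packets go out per
# low-priority packet, yet the low class is never starved.
print([cbwfq_dequeue(queues, {"hi": 3, "lo": 1}, credits) for _ in range(3)])
```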
Attachment | Size |
---|---|
CBWFQ-4Classes-tr.pdf | 401.63 KB |
Abstract:
The forthcoming use of multimedia applications will require powerful computers and high-performance networks. While great progress has been made in the physical layer of networks, the upper software layers of the OSI reference model have not kept pace. This paper presents the demands imposed by multimedia applications on the underlying networks and their mapping to transport layer services, which the protocols implementing this layer must provide. Four well-known transport layer protocols are briefly presented (TCP, TP4, VMTP, HSTP/XTP). The mechanisms employed by each of them are studied and their suitability for demanding multimedia environments is evaluated. We conclude that much more effort needs to be invested in transport layer protocols before high-performance network architectures can fulfill the diverse requirements of tomorrow's multimedia applications.
Attachment | Size |
---|---|
comprarative-study-transport-protocols4multimedia.pdf | 228.98 KB |
Abstract:
Distributed software systems need to evolve according to the ever-changing requirements on which they were built. Software system tailorability can be achieved in terms of component software. Atoms and molecules, the basic constructs of the atoma framework, are the building blocks for distributed, tailorable, component-based software systems. These constructs can be considered independent agents that communicate through unanticipated connections established at run-time, thus forming agent communities. System tailorability can take place at two levels. At the higher level, whole parts of the functionality of a system, represented as agents, can be altered in order to provide new functionality. At the lower level, the tailorability of an agent itself, that is, the tailorability of its functionality, is achieved through a flexible service-mapping implementation for rule-based method invocation.
Attachment | Size |
---|---|
distributed-sys-tailorability-component.pdf | 351.44 KB |
Abstract:
In this paper, a wireless multimedia traffic-oriented network scheme over a fourth-generation (4-G) system is presented and analyzed. We conducted an extensive evaluation study for various mobility configurations in order to capture the behavior of the IEEE 802.11b standard over a test-bed wireless multimedia network model. In this context, the Quality of Service (QoS) over this network is vital for providing a reliable high-bandwidth platform for data-intensive sources such as video streaming. The main QoS issues considered were therefore the bandwidth metrics for both dropped and lost packets and their mean packet delay under various traffic conditions. Finally, we used a generic distance-vector routing protocol based on an implementation of the Distributed Bellman-Ford algorithm. The performance of the test-bed network model has been evaluated using the NS-2 simulation environment.
Attachment | Size |
---|---|
IECCS2007_0301.pdf | 216.27 KB |
Abstract:
In this paper the performance of asymmetric-sized finite-buffered Delta Networks with 2-class routing traffic is presented and analyzed under uniform traffic conditions and various loads, using simulations. We compared the performance of the 2-class priority mechanism against the single-priority one, by gathering metrics for the two most important network performance factors, namely packet throughput and delay. We also introduce and calculate a universal performance factor, which weighs the relative importance of each of the above main performance factors. We found that the use of asymmetric-sized buffered systems leads to better exploitation of network capacity, while the increases in delay can be tolerated. The goal of this paper is to help network designers predict performance before actual network implementation and understand the impact of each parameter.
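The universal performance factor mentioned above combines normalized throughput and delay into a single indicator. The exact definition is given in the paper; one plausible shape for such an indicator, with all symbols and weights here being assumptions for illustration, is:

```latex
% Illustrative combined indicator (not the paper's exact formula):
% normalized deviations of delay D from its minimum and of throughput T
% from its maximum, weighted by designer-chosen importance factors.
\[
  U \;=\; \sqrt{\, w_d \left( \frac{D - D_{\min}}{D_{\min}} \right)^{2}
           + w_t \left( \frac{T_{\max} - T}{T} \right)^{2} \,},
  \qquad w_d + w_t = 1,
\]
% so that lower values of U indicate better combined performance.
```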
Attachment | Size |
---|---|
Asymmetric_Sized_Delta_Networks-tr.pdf | 276.29 KB |
Abstract:
Wireless Local Area Networks (WLANs) have developed into a viable technology for multimedia traffic and are expected to support multimedia services with guaranteed Quality of Service (QoS) for diverse traffic types (video, audio, and data). In this paper, we consider the incorporation of prediction into a generic distance-vector routing protocol for WLANs and evaluate the performance of the resulting routing scheme. Our study considers an enhancement of the widely used Distributed Bellman-Ford algorithm and assesses the effectiveness of the enhanced version on top of a fourth-generation (4-G) system. In order to compare the performance of the standard protocol against that of the prediction-enhanced version, we gather metrics for the two most important network performance factors, namely packet throughput and delay, under different mobility and traffic conditions, using the NS-2 simulation environment. Both medium- and high-mobility configurations have been considered in this study.
Attachment | Size |
---|---|
IJITS-tr.pdf | 300.24 KB |
Abstract:
In this paper, the modelling, analysis and performance evaluation of a novel architecture for internal-priority finite-buffered Multistage Interconnection Networks (MINs) is presented. We model the proposed architecture, giving the details of its operation, describing its states and detailing the conditions and effects of state transitions; we also provide a formal model for evaluating its performance. The proposed architecture's performance is subsequently analyzed under uniform traffic conditions, considering various offered loads, buffer lengths and MIN sizes, using simulations. We compare the internal-priority scheme against the non-priority (single-priority) scheme, by gathering metrics for the two most important network performance factors, namely packet throughput and the mean time a packet needs to traverse the network. We demonstrate and quantify the improvements in MIN performance stemming from the introduction of priorities, in terms of throughput and of a combined performance indicator which depicts the overall performance of the MIN. These performance measures can be valuable assets for designers of parallel multiprocessor systems and networks seeking to minimize overall deployment costs and deliver efficient systems.
Attachment | Size |
---|---|
internal-priority-min-tr.pdf | 374.85 KB |
Abstract:
Next-generation network architectures strive to achieve high bandwidth and ultra-low latency for the packets traversing the offered end-to-end paths. Multistage Interconnection Networks (MINs) are often employed for implementing NGNs, but while MINs are fairly flexible in handling a variety of traffic loads, they tend to saturate quickly under broadcast and multicast traffic, especially as network size increases. In response to this issue, multilayer MINs have been proposed; however, their performance prediction and evaluation has not been studied sufficiently thus far. In this paper, we evaluate and discuss the performance of multilayer MINs under multicast traffic, considering also two levels of packet priorities, since support for multiple QoS levels is an indispensable requirement for NGNs. Different offered loads and buffer size configurations are examined in this context, and performance results are given for the two most important network performance factors, namely packet throughput and delay. We also introduce and calculate a universal performance factor, which weighs the relative importance of each of the above main performance factors. The findings of this study can be used by NGN system designers to predict the performance of each configuration and adjust the design of their communication infrastructure to the traffic requirements at hand.
Attachment | Size |
---|---|
paper_multicast_priority-tr.pdf | 441.73 KB |
Abstract:
In this paper the performance of multi-layered asymmetric-sized finite-buffered Delta Networks supporting multi-class routing traffic is presented and analyzed under uniform traffic conditions and various loads, using simulations. The rationale behind introducing asymmetric-sized buffered systems is to better exploit the available buffer space, while the multi-layered architecture is applied in order to further improve the overall performance of the network. The findings of this performance evaluation can be used by network designers for drawing optimal configurations while setting up the network, so as to best meet the performance and cost requirements under the anticipated traffic load and quality of service specifications.
Attachment | Size |
---|---|
min-ml-mp-asym.pdf | 487.79 KB |
Abstract:
Multistage Interconnection Networks (MINs) are frequently used for connecting processors in parallel computing systems or for constructing high-speed networks such as ATM (Asynchronous Transfer Mode) and Gigabit Ethernet switches. New applications require distributed computing implementations, but old networks are too slow to allow efficient use of remote resources. Moreover, multimedia applications have high bandwidth requirements; some of them are also sensitive to packet loss and demand reliable data transmission. Specific applications require bulk data transfers for database replication or load balancing, and packet loss minimization is therefore necessary to increase their performance. The demand for high-performance multimedia services, such as full-motion video on demand, is becoming an increasingly important driving force in the communication market of the Digital Age. Thus, the performance of MINs is a crucial factor that must be taken into account in the design of new applications. Their performance is mainly determined by their communication throughput and cell latency, which have to be investigated either through time-consuming simulations or approximated by mathematical models. In this paper we investigate the performance of MINs in order to determine optimal values for hardware parameters under different operating conditions.
Attachment | Size |
---|---|
mins-cr.pdf | 94.35 KB |
Abstract:
In this paper a novel architecture for dual-priority single-buffered blocking Multistage Interconnection Networks (MINs) is presented. We analyzed their performance under uniform traffic conditions and various loads, using simulations. We compared the dual-priority architecture against a single-priority MIN, by gathering metrics for the two most important network performance factors, namely packet throughput and the mean time a packet needs to traverse the network, and demonstrated the gain of high-priority packets over low-priority packets under different configuration schemes. In this paper we focus on studying how the priority bit in the header field of transmitted packets influences the performance of high- and low-priority traffic in a MIN. Performance prediction before actual network implementation and understanding the impact of parameter settings in a MIN setup are valuable assets for network designers aiming to minimize overall deployment costs and deliver efficient networks.
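To illustrate the role of the header priority bit described above, the following Python sketch resolves a conflict at a 2x2 switching element by letting the priority bit decide and breaking ties randomly. The packet layout and the tie-breaking rule are illustrative assumptions, not the paper's exact design.

```python
import random

def resolve_conflict(pkt_a, pkt_b):
    """Two packets contend for the same output port of a 2x2 switching
    element: the one whose header carries the high-priority bit wins;
    ties are broken randomly. The loser is blocked (or dropped,
    depending on the buffering scheme)."""
    if pkt_a["prio"] != pkt_b["prio"]:
        winner = pkt_a if pkt_a["prio"] > pkt_b["prio"] else pkt_b
    else:
        winner = random.choice([pkt_a, pkt_b])
    loser = pkt_b if winner is pkt_a else pkt_a
    return winner, loser

# prio=1 marks a high-priority packet in this sketch.
hi = {"id": 17, "prio": 1}
lo = {"id": 42, "prio": 0}
winner, blocked = resolve_conflict(hi, lo)
assert winner["id"] == 17  # the high-priority packet always advances
```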
Attachment | Size |
---|---|
Dual_priority_MINs.pdf | 225.64 KB |
Abstract:
In this paper a novel two-priority network scheme is presented and exemplified through its application to single-buffered Delta Networks in packet-switching environments. Network operations considered include conflict resolution and communication strategies. The proposed scheme is evaluated and compared against the single-priority scheme. Performance evaluation was conducted through simulation, due to the complexity of the model, and uniform traffic conditions were considered. Metrics were gathered for the two most important network performance factors, namely packet throughput and the mean time a packet needs to traverse the network. The model can also be uniformly applied to several representative networks, providing a basis for fair comparison and the data network designers need to select optimal values for network operation parameters.
Attachment | Size |
---|---|
Two_priority_MINs.pdf | 262.18 KB |
Abstract:
In this paper, a wireless Circular Model over the Distance Vector routing protocol is presented and analyzed. The performance of this algorithm, an implementation of the Distributed Bellman-Ford algorithm, has been evaluated using the NS-2 simulation environment. We conducted an extensive evaluation study for various mobility schemes in order to capture the behavior of the nodes and the routing protocol in a real-life hotspot situation. In the test-bed model, while the number of source nodes was allowed to vary arbitrarily, there was exactly one destination node, thus closely modeling real-life situations where a single hotspot/access point exists. Finally, different constant bit rates (CBR) were used in order to estimate the receiving throughput, the dropping rates, the number of lost packets, and the average packet delay under various traffic conditions. This study aims to help wireless network designers choose the best-suited routing protocols for their networks, by providing explicit performance figures for common network setups.
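Since several of the entries above build on distance-vector routing, a compact Python sketch of the core Distributed Bellman-Ford update may help. The message format and data structures are assumptions for illustration, not NS-2's implementation.

```python
INF = float("inf")

def dv_update(my_table, neighbor, neighbor_table, link_cost):
    """One step of the Distributed Bellman-Ford relaxation: on receiving
    a neighbor's distance vector, adopt any route through that neighbor
    that is cheaper than the currently known one.
    my_table: {dest: (cost, next_hop)}; returns True if anything changed."""
    changed = False
    for dest, (n_cost, _) in neighbor_table.items():
        candidate = link_cost + n_cost
        if candidate < my_table.get(dest, (INF, None))[0]:
            my_table[dest] = (candidate, neighbor)
            changed = True
    return changed  # a change would trigger advertising the new vector

# Node A hears from neighbor B (link cost 1) that B reaches C at cost 2.
table_a = {"A": (0, None), "B": (1, "B")}
dv_update(table_a, "B", {"C": (2, "C")}, link_cost=1)
print(table_a["C"])  # (3, 'B'): A now routes to C via B at cost 3
```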
Attachment | Size |
---|---|
DVRP_CISSE2007.pdf | 256.21 KB |
Abstract:
Multilayer MINs have emerged mainly due to the increased need for routing capacity in the presence of multicast and broadcast traffic; their performance prediction and evaluation, however, has not been studied sufficiently thus far. In this paper, we use simulation to evaluate the performance of multilayer MINs with switching elements of different buffer sizes and under different offered loads. The findings of this paper can be used by MIN designers to optimally configure their networks.
Attachment | Size |
---|---|
paper_multicast-final.pdf | 272.57 KB |
Abstract:
The performance of Multistage Interconnection Networks (MINs) under hotspot traffic, where some percentage of the traffic is targeted at single nodes (the hotspots), is of crucial interest. Prioritizing packets has already been proposed in previous works as an alleviation of the tree saturation problem, leading to a scheme that natively supports 2-class priority traffic. In order to prevent hotspot traffic from degrading uniform traffic, we extend previous studies by introducing multi-layer Switching Elements (SEs) at the last stages, in an attempt to balance MIN performance and cost. In this paper the performance evaluation of dual-priority, double-buffered, multi-layer MINs under single-hotspot setups is presented and analyzed using simulation experiments. The findings of this paper can be used by MIN designers to optimally configure their networks.
Attachment | Size |
---|---|
Multi-Hotspot-tr.pdf | 360.92 KB |
Abstract:
Differentiated Services (DiffServ) and other scheduling strategies are now widespread in the traditional, “best effort” Internet. These Internet architectures offer Quality of Service (QoS) guarantees for important customers while at the same time supporting less critical applications of lower priority. Strict priority queuing (PQ), weighted round robin (WRR), and class-based weighted fair queuing (CBWFQ) are three common scheduling disciplines for the differentiation of services in telecommunication networks. In this paper, a comparative performance study of the PQ, WRR and CBWFQ scheduling policies, applied on a double-buffered, 6-stage Multistage Interconnection Network (MIN) that natively supports a 2-class priority mechanism, is presented and analyzed using simulation experiments. We also consider a 10-stage MIN, to validate that the conclusions drawn from the 6-stage MIN apply to MINs of different sizes. The findings of this paper can be used by MIN designers to optimally configure their networks.
Attachment | Size |
---|---|
PQ-CBWFQ-WRR-MIN-v3i.pdf | 534.53 KB |
Abstract:
Banyan Networks are a major class of Multistage Interconnection Networks (MINs). They have been widely used as efficient interconnection structures for parallel computer systems, as well as switching nodes for high-speed communication networks. Their performance is mainly determined by their communication throughput and their mean packet delay. In this paper we use a model based on a universal performance factor, which weighs the relative importance of each of the above main performance factors (throughput and delay) in the design process of a MIN. The model can also be uniformly applied to several representative networks. The complexity of the model requires it to be investigated through time-consuming simulations. In this paper we study a typical 8×8 Baseline Banyan switch consisting of 2×2 Switching Elements (SEs). The objective of the simulation is to determine the optimal buffer size for the MIN stages under different conditions.
Attachment | Size |
---|---|
banyanSwitch.pdf | 103.09 KB |
Abstract:
In this paper the modeling of Omega Networks supporting multi-class routing traffic is presented and their performance is analyzed. We compare the performance of the multi-class priority mechanism against the single-priority one, by gathering metrics for the two most important network performance factors, namely packet throughput and delay, under uniform traffic conditions and various offered loads, using simulations. Moreover, two different test-bed setups were used in order to investigate and analyze the performance of all priority classes of traffic under different Quality of Service (QoS) configurations. In the considered environment, Switching Elements (SEs) that natively support multi-class priority routing traffic are used for constructing the MIN, and we also consider double-buffered SEs; these are two configuration parameters that have not been addressed thus far. The rationale behind introducing a multiple-priority scheme is to provide different QoS guarantees to traffic from different applications, a highly desired feature for many IP network operators, and particularly for enterprise networks.
Attachment | Size |
---|---|
icsnc-2008-tr.pdf | 168.72 KB |
Abstract:
Large swings in the demand for content are commonplace within the Internet. Although Multistage Interconnection Networks (MINs) are fairly flexible in handling a variety of traffic loads, their performance is considerably degraded by hotspot traffic, especially as network size increases. As an alleviation of the tree saturation problem, packet prioritization has been proposed, leading to a scheme that natively supports multi-priority traffic. In this paper the performance evaluation of double-buffered Delta Networks under single-hotspot setups, with different offered loads and 2-class routing traffic, is presented and analyzed using simulation experiments. A performance comparison of the dual- versus the single-priority scheme in a hotspot environment is outlined, by calculating a universal performance factor which weighs the relative importance of the two most important performance metrics, namely packet throughput and delay. The findings of this paper can be used by MIN designers to optimally configure their networks.
Attachment | Size |
---|---|
c063.pdf | 289.15 KB |
Abstract:
The main concerns in designing multistage switching fabrics are speed, throughput, delay and delay variance for a given bandwidth. The rationale behind using various priority mechanisms is either to offer different quality of service levels to packets or to optimize performance parameters of the network, e.g. to minimize internal blocking in the Switching Elements (SEs). We investigated the performance parameters of an enhanced priority (EP) mechanism versus a single-priority (SP) one. In the EP scheme, packet priority is computed dynamically and is directly proportional to the length of the transmission queue of the SE in which the packet is currently stored. Finally, we extended the idea of the priority scheme by proposing a multi-priority (MP) mechanism, in which each SE has two transmission queues per link, one dedicated to high-priority packets and the other to low-priority ones. We simulated a multistage network under uniform traffic conditions and concluded that the proposed double-buffered SEs provide higher throughput and decreased latency.
Attachment | Size |
---|---|
ICCMSE2007_1501.pdf | 338.39 KB |
Abstract:
Recommender systems aim to propose items that are expected to be of interest to the users. As one of the most successful approaches to building recommender systems, collaborative filtering exploits the known preferences of a group of users to formulate recommendations or predictions of the unknown preferences of other users. In many cases, collaborative filtering algorithms handle complex items, which are described using hierarchical tree structures containing rich semantic information. In order to make accurate recommendations on such items, the related algorithms must examine all aspects of the available semantic information. Thus, when collaborative filtering techniques are employed to adapt the execution of business processes, they must take into account the services' Quality of Service parameters, so as to generate recommendations tailored to individual user needs. In this paper, we present a collaborative filtering-based algorithm which takes into account the web services' QoS parameters in order to tailor the execution of business processes to the preferences of users. An offline clustering technique is also introduced to support the efficient and scalable execution of the proposed algorithm in the presence of large repositories of sparse data.
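As background for the collaborative filtering approach outlined above, here is a minimal user-based prediction sketch using Pearson-weighted averaging. The rating matrix and helper names are hypothetical; the paper's actual algorithm additionally factors in QoS parameters.

```python
import math

def pearson(u, v, ratings):
    """Pearson similarity between users u and v over co-rated items."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    mu = sum(ratings[u][i] for i in common) / len(common)
    mv = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu) * (ratings[v][i] - mv) for i in common)
    den = math.sqrt(sum((ratings[u][i] - mu) ** 2 for i in common) *
                    sum((ratings[v][i] - mv) ** 2 for i in common))
    return num / den if den else 0.0

def predict(u, item, ratings):
    """Similarity-weighted average over positively correlated neighbors
    that have rated the item (negative neighbors ignored for brevity)."""
    pairs = [(pearson(u, v, ratings), r[item])
             for v, r in ratings.items() if v != u and item in r]
    pairs = [(s, r) for s, r in pairs if s > 0]
    norm = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / norm if norm else None

ratings = {"u1": {"a": 5, "b": 3}, "u2": {"a": 4, "b": 2, "c": 5},
           "u3": {"a": 1, "b": 5, "c": 2}}
print(predict("u1", "c", ratings))  # 5.0, following like-minded u2
```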
Attachment | Size |
---|---|
Draft paper version | 640.09 KB |
Abstract:
In this paper, we present a framework which provides runtime adaptation for BPEL scenarios. The adaptation is based on (a) the quality of service parameters of the available web services, (b) the quality of service policies specified by users, (c) collaborative filtering techniques, allowing clients to further refine the adaptation process by considering service selections made by other clients, (d) monitoring, in order to follow the variations of QoS attribute values, and (e) users' opinions on services they have used.
Abstract:
In the context of service-oriented computing, the introduction of the Quality-of-Service (QoS) aspect leads to the need to adapt the execution of programs to the QoS requirements of the particular execution. This is typically achieved by finding alternate services that are functionally equivalent to the ones originally specified in the program and whose QoS characteristics closely match the requirements, and invoking the alternate services instead of the originally specified ones; the same approach can also be employed for tackling exceptions. The techniques proposed thus far, however, cannot be applied in a secure context, where data are encrypted and signed for the originally intended recipient. In this paper, we introduce a framework for facilitating adaptation in the context of secure SOA.
Attachment | Size |
---|---|
ws-sec-mcis09-tr.pdf | 297.54 KB |
Abstract:
WS-BPEL has been adopted as the predominant method for composing individual web services into higher-level business processes. The designers of WS-BPEL scenarios define at development time the specific web services that will be invoked in the context of the business process they model; however, in the context of the current web, where each functionality is offered by multiple service providers under different quality of service parameters, using a fixed BPEL scenario has been recognized as inadequate for servicing the diverse needs of business process clients. To this end, WS-BPEL scenario execution adaptation has been proposed, mainly allowing clients to specify quality of service policies which drive the dynamic selection of the services that will be invoked. In this paper, we present a framework extending the quality of service-based adaptation mechanisms with collaborative filtering techniques, allowing clients to further refine the adaptation process by considering service selections made by other clients in the context of the same business processes.
Attachment | Size |
---|---|
margaris-adapting-WS.pdf | 365.89 KB |
Abstract:
Organisations nowadays are in the process of developing network-enabled systems through which they deliver electronic services to citizens, customers and enterprises. Often, such services need to be combined in order to cover all aspects of a service consumer's life event. The composition of different services, though, is usually left to the service consumer, who needs to manually locate the individual services and drive the process of obtaining results from some services and feeding them as input to subsequent ones, until all relevant services have been executed. Organisations could improve their level of service through the provision of composite services, realizing thus the concept of one-stop government, i.e. by making available mechanisms that would undertake the tasks of input collection, invocation and execution synchronisation of individual services, and delivery of the final result as a reply; such facilities, however, have not yet been made widely available. This shortage stems partly from financial considerations, since frequent changes in the regulatory framework of the individual services, in their interoperation requirements or in the technical aspects of the service implementation render the development and maintenance of composite services inexpedient, and partly from technical issues, since format or representational incompatibilities in parameters and results hinder automation. In this paper we present an active blackboard architecture which automates the task of service composition, based on the semantics of individual services and the data dependencies between them. The blackboard incorporates registries, which can be employed for facilitating service discovery, and an execution engine that arranges for dynamic service composition and execution.
Attachment | Size |
---|---|
ActiveBlackboardIJEG.pdf | 320.26 KB |
Abstract:
Web services are functional, independent components that can be called over the web to perform a task. They are provided by organizations to enable others to perform the tasks the organization offers online. However, with an ever-increasing number of web services, finding the web service that performs a certain task is not always easy. Furthermore, from an end-user point of view, what is needed is the actual result and not the service per se. It is often the case that more than one service has to be combined to produce the anticipated outcome, e.g. in the case of life-events. To this end, we propose an active, ontology-based blackboard architecture that aims at tackling the problems inherent in the dynamic synthesis of composite web services and at facilitating user interaction with complex e-government transactions.
Attachment | Size |
---|---|
active-ontology-blackboard-for-interop.pdf | 430.27 KB |
Abstract:
In this paper, we present a framework which incorporates runtime quality of service-based adaptation for BPEL scenarios, allowing their execution to be tailored to the diverse needs of individual users. The proposed framework also caters for automatically resolving system-level exceptions, such as machine outages or network partitions, while both scenario execution adaptation and exception resolution maintain the transactional semantics that invocations to multiple services offered by the same provider may bear.
Attachment | Size |
---|---|
c072.pdf | 85.37 KB |
Abstract:
In this paper, we present a framework which incorporates runtime adaptation for BPEL scenarios. The adaptation is based on (a) the quality of service parameters of the available services, allowing their execution to be tailored to the diverse needs of individual users, and (b) collaborative filtering techniques, allowing clients to further refine the adaptation process by considering service selections made by other clients in the context of the same business processes. The proposed framework also caters for maintaining the transactional semantics that invocations to multiple services offered by the same provider may bear.
The paper is available through ScienceDirect.
Attachment | Size |
---|---|
scico-tr.pdf | 1.52 MB |
Abstract:
In this technical report we give examples of how Quality of Service-based and collaborative filtering-based techniques can be combined to drive the adaptation of WS-BPEL scenario execution.
Attachment | Size |
---|---|
sdbs-tr-14-002-v3.pdf | 546.12 KB |
Abstract:
Web services have become the key technology in business process management. Business processes can be self-contained or composed from sub-processes; the latter category is typically specified using the Web Services Business Process Execution Language (WS-BPEL) and executed by a Web Services Orchestrator (WSO). During the execution of such a composite service, however, a number of faults stemming from the distributed nature of the SOA architecture, e.g. network or server failures, may occur. WS-BPEL includes provisions for exception handling, which can be exploited for detecting such failures; once detected, a failure can be resolved by invoking alternate web service implementations that perform the same business task as the failed one. However, the inclusion of such provisions is a tedious assignment for the business process designer, while additional effort would be required to maintain the BPEL scenarios in cases where some alternate WS implementations cease to exist or new ones are introduced. In our research we are developing a framework for automating the handling of this kind of exception. The proposed solution employs a pre-processor that enhances BPEL scenarios with code that detects failures, discovers alternate WS implementations and invokes them, thus fully resolving the exception. Alternate WS implementation discovery is based on service relevance, which takes into account both the functional and the qualitative properties of web services.
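The failure-resolution flow described above can be pictured with a small Python sketch: intercept the fault, look up functionally equivalent services and retry. The registry lookup and invocation helpers are hypothetical placeholders, not the framework's actual API.

```python
class ServiceFault(Exception):
    """Raised when a service endpoint is unreachable or fails."""

def invoke_with_resolution(invoke, endpoint, payload, find_alternates):
    """Try the originally specified endpoint; on a system fault, fall
    back to functionally equivalent alternates until one succeeds."""
    candidates = [endpoint] + find_alternates(endpoint)
    last_fault = None
    for url in candidates:
        try:
            return invoke(url, payload)      # normal completion
        except ServiceFault as fault:
            last_fault = fault               # try the next equivalent
    raise last_fault  # no equivalent service could resolve the fault

# Hypothetical usage: the primary endpoint is down, the alternate answers.
def fake_invoke(url, payload):
    if "primary" in url:
        raise ServiceFault("connection refused")
    return {"from": url, "echo": payload}

result = invoke_with_resolution(
    fake_invoke, "http://primary/ws", {"q": 1},
    find_alternates=lambda url: ["http://alternate/ws"])
print(result["from"])  # http://alternate/ws
```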
Attachment | Size |
---|---|
enhancing-BPEL-scenarios.pdf | 248.59 KB |
Abstract:
WS-BPEL has become the predominant technology for specifying and executing composite business processes within the Service Oriented Architecture. During the execution of such a composite business process, however, a number of faults stemming from the distributed nature of the SOA architecture (e.g. network or server failures) may occur. To this end, the WS-BPEL scenario designer must exploit the provisions offered by WS-BPEL to catch exceptions owing to system failures and resolve them, typically by invoking some alternate, equivalent web service that is expected to be reachable and available. The task of specifying system fault handlers, though, is an additional burden for the WS-BPEL scenario designer, and the presence of such handlers within the WS-BPEL scenario necessitates additional maintenance activities as new alternate services become available or some of the specified ones are withdrawn. In this paper, we propose a middleware-based framework for system exception resolution, which undertakes the tasks of failure interception, discovery of alternate services and their invocation. The middleware is deployed and maintained independently of the WS-BPEL scenarios, thus removing the need for specifying and maintaining system fault handlers within the scenarios. We also present performance measures, establishing that the overhead imposed by the addition of the proposed middleware layer is minimal.
Article available through the ACM Author-izer service.
Attachment | Size |
---|---|
iiWAS_tr.pdf | 165.23 KB |
Abstract:
WS-BPEL scenario execution adaptation has been proposed by numerous researchers as a response to the need of users to tailor WS-BPEL scenario execution to their individual preferences; these preferences are typically expressed through Quality of Service (QoS) policies, which the adaptation mechanism considers in order to select the services that will ultimately be invoked to realize the desired business process. In this paper, we study the potential to parallelize the execution of the WS-BPEL scenario in order to minimize its response time and/or achieve higher scores in the other qualitative dimensions, such as cost, reliability, etc., at the same time. We also describe, develop and validate a parallelization algorithm for realizing the proposed enhancements.
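As a taste of the parallelization idea, the sketch below runs data-independent service invocations concurrently instead of sequentially. The thread-pool approach and the stub functions are illustrative assumptions, not the algorithm developed in the paper.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_service(name, seconds):
    """Stub for a web service invocation dominated by network latency."""
    time.sleep(seconds)
    return f"{name}: done"

# Two invocations with no data dependency between them: running them
# in parallel makes the response time max(a, b) instead of a + b.
start = time.time()
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(call_service, "quote", 0.2),
               pool.submit(call_service, "stock", 0.3)]
    results = [f.result() for f in futures]
print(results, f"elapsed ~{time.time() - start:.1f}s")  # ~0.3s, not 0.5s
```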
Attachment | Size |
---|---|
Draft paper version | 346.69 KB |
Abstract:
WS-BPEL scenario execution adaptation has been proposed by numerous researchers as a response to the need of users to tailor the WS-BPEL scenario execution to their individual preferences; these preferences are typically expressed through Quality of Service (QoS) policies, which the adaptation mechanism considers in order to select the services that will ultimately be invoked to realize the desired business process. In this paper, we consider a number of issues related to WS-BPEL scenario adaptation, aiming to enhance adaptation quality and improve the QoS offered to end users. More specifically, with the goal of broadening the service selection pool we (a) discuss the identification of potential services that can be used to realize a functionality used in the WS-BPEL scenario and (b) elaborate on transactional semantics that invocations to multiple services offered by the same provider may bear. We also describe and validate an architecture for realizing the proposed enhancements.
Attachment | Size |
---|---|
Draft paper version; (c) IEEE; Proceeding version: http://ieeexplore.ieee.org/document/7399085/ | 582.72 KB |
Abstract:
In this technical report, we describe and exemplify the transformations applied by the WS-BPEL preprocessor in order to produce a BPEL scenario that can be adapted according to QoS specifications, within the architecture described in [1].
[1] adopts a greedy algorithm for performing adaptation, i.e. it uses only the QoS specifications pertaining to the first invoke activity IA addressed to a specific service provider S, so as to decide the service provider to which both IA and subsequent invocations of operations provided by S will be directed.
The greedy algorithm may result in suboptimal bindings, and in some cases it may even leave the middleware unable to find any service selection that fully services the BPEL scenario, although such a selection does exist. To this end, a service provider-level adaptation strategy can be employed: the transformed scenario may communicate to the ASOB middleware [1] the information concerning all operation invocations on a specific partner link before the first invocation of an operation provided by that partner link is executed, so that the ASOB middleware can exploit this information to remedy the problems stemming from the greedy nature of the adaptation method specified in [1].
Attachment | Size |
---|---|
preprocessor-transformations-techRep.pdf | 240.7 KB |
Abstract:
In this paper, we introduce algorithms for pruning and aging user ratings in collaborative filtering systems based on their age, under the rationale that old user ratings may no longer accurately reflect users' current preferences. The aging algorithm reduces the importance of aged ratings, while the pruning algorithm removes them from the database. The algorithms are evaluated against various types of datasets. The pruning algorithm has been found to present a number of advantages, namely (1) reducing the rating database size, (2) achieving better prediction generation times and (3) improving prediction quality by cutting off predictions with high error. The algorithm can be used on any rating database that includes a timestamp and has proven effective on all types of datasets examined, from movies and music to videogames and books.
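A minimal sketch of the two ideas follows, assuming a simple exponential decay for aging and a fixed cutoff for pruning; the half-life and cutoff values are illustrative, not the parameters tuned in the paper.

```python
import time

HALF_LIFE = 365 * 24 * 3600    # illustrative: weight halves per year
CUTOFF = 3 * 365 * 24 * 3600   # illustrative: drop ratings older than 3 years

def age_weight(ts, now=None):
    """Aging: exponentially decay a rating's influence with its age."""
    now = now or time.time()
    return 0.5 ** ((now - ts) / HALF_LIFE)

def prune(ratings, now=None):
    """Pruning: remove ratings older than the cutoff from the database.
    ratings: list of (user, item, value, timestamp) tuples."""
    now = now or time.time()
    return [r for r in ratings if now - r[3] <= CUTOFF]

now = time.time()
ratings = [("u1", "i1", 5, now - 10 * 86400),        # 10 days old
           ("u1", "i2", 4, now - 4 * 365 * 86400)]   # 4 years old
print(len(prune(ratings, now)))                  # 1: the stale rating is dropped
print(round(age_weight(ratings[0][3], now), 3))  # close to 1.0 for a fresh rating
```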
Attachment | Size |
---|---|
Draft paper version; (c) IEEE; Proceedings version: http://ieeexplore.ieee.org/document/7849920/ | 391.26 KB |
Abstract:
BPEL/WS-BPEL is the predominant approach for combining individual web services into integrated business processes, allowing for the specification of their sequence, control flow and data exchanges. BPEL, however, does not include mechanisms for considering the invoked services' Quality of Service (QoS) parameters, and thus BPEL scenarios can neither tailor their execution to the individual user's needs nor adapt to the highly dynamic environment of the web, where new services may be deployed, old ones withdrawn, or existing ones may change their QoS parameters. Moreover, infrastructure failures in the distributed environment of the web introduce an additional source of failures that must be considered in the context of QoS-aware service execution. In this work we propose a framework for addressing the issues identified above; the framework allows users to specify the QoS parameters that they require and undertakes the task of locating and invoking suitable services. Finally, the proposed framework intercepts and resolves faults occurring during service invocation, respecting the QoS restrictions specified by the consumer.
Attachment | Size |
---|---|
icws09_v2_6.pdf | 306.71 KB |
Abstract:
WS-BPEL is widely used nowadays for specifying and executing composite business processes within the Service Oriented Architecture (SOA). During the execution of such business processes, however, a number of faults stemming from the nature of SOA (e.g. network or server failures) may occur. The WS-BPEL scenario designer must therefore use the provisions offered by WS-BPEL to catch these exceptions and resolve them, usually by invoking some equivalent web service that is expected to be reachable and available. Specifying system fault handlers, though, is an additional task for the WS-BPEL scenario designer, and the presence of such handlers within the scenario necessitates extra maintenance activities as new alternate services emerge or some of the specified ones are withdrawn. In this paper, we propose a middleware-based framework for system exception resolution, which undertakes the tasks of failure interception, discovery of alternate services and their invocation. The process of selecting the alternate services to be invoked can be driven by a process consumer-specified QoS policy, specifying lower and upper bounds for each QoS attribute as well as the importance of each QoS parameter. Moreover, the middleware arranges for bridging syntactic differences between the originally invoked services and functionally equivalent replacements, by employing XSLT-based transformations. The middleware is deployed and maintained independently of the WS-BPEL scenarios, thus removing the need for specifying and maintaining system fault handlers within the scenarios. We also present performance measures, establishing that the overhead imposed by the addition of the proposed middleware layer is minimal.
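The QoS policy described above (per-attribute bounds plus importance weights) can be pictured with the following Python sketch, which filters candidate services by the bounds and ranks the survivors by a weighted score. The attribute names and the scoring rule are assumptions for illustration, not the middleware's actual policy format.

```python
def admissible(qos, policy):
    """Keep a candidate only if every attribute falls within the
    consumer-specified [low, high] bounds."""
    return all(low <= qos[attr] <= high
               for attr, (low, high) in policy["bounds"].items())

def score(qos, policy):
    """Weighted score; 'higher_is_better' flags invert cost-like
    attributes so that a larger score is always preferable."""
    total = 0.0
    for attr, weight in policy["weights"].items():
        value = qos[attr]
        if not policy["higher_is_better"][attr]:
            value = -value
        total += weight * value
    return total

policy = {"bounds": {"latency_ms": (0, 200), "cost": (0, 1.0)},
          "weights": {"latency_ms": 0.7, "cost": 0.3},
          "higher_is_better": {"latency_ms": False, "cost": False}}
candidates = {"svcA": {"latency_ms": 120, "cost": 0.5},
              "svcB": {"latency_ms": 90, "cost": 0.9},
              "svcC": {"latency_ms": 400, "cost": 0.1}}  # violates bounds
ok = {n: q for n, q in candidates.items() if admissible(q, policy)}
best = max(ok, key=lambda n: score(ok[n], policy))
print(best)  # svcB: its lower latency outweighs its higher cost here
```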
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
iiwas08_extended_tr.pdf | 435.53 KB |
Abstract:
Web services have become the leading technology for application-to-application (A2A) communication over distributed and heterogeneous environments. Both academia and industry have strived to enable useful service collaborations among distributed systems without any human intervention. Web service composition can be used to this end, to achieve business automation within one company or realize business-to-business (B2B) integration of heterogeneous software and cross-organizational computing systems. Service composition provides added value when a web service composition itself becomes a higher-level composite web service. However, as business processes are long-lasting transactions, exceptions may often occur, necessitating the replacement of a service component that has become unavailable and hinders the completion of some business process. In this paper we present an exception resolving approach based on discovering replacement components that are functionally equivalent, taking also into account criteria for qualitative substitutability. The proposed solution introduces the Service Relevance and Replacement Framework (SRRF), which undertakes exception handling.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
exception-resolution.pdf | 134.56 KB |
Abstract:
Web services are functional, independent components that can be called over the web to perform a task. Besides being used individually to deliver some well-specified functionality, web services may be used as building blocks that can be combined to implement a more complex function. In such compositions, typically some web services produce results that are used as input for web services that will be subsequently invoked. In the execution schemes currently employed, web services producing intermediate results deliver them to some "coordinating entity", which arranges the forwarding of these intermediate results to web services that require them as input. In this paper we present an execution scheme that employs direct communication between producers and consumers of intermediate results. Besides performance improvement stemming from reduction of network communication, this scheme permits consumer web services to employ simpler authenticity and integrity verification algorithms on incoming parameters, when the producer web service is considered trustworthy.
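A toy comparison makes the difference between the two schemes visible: in the conventional scheme the intermediate result travels producer-to-coordinator-to-consumer, whereas in the proposed scheme the coordinator only tells the producer where to deliver its output. The JSON payloads and the forward_to convention below are illustrative assumptions, not the paper's protocol.

```python
# Toy comparison of the two execution schemes; endpoints, JSON payloads and
# the forward_to convention are illustrative assumptions.

import json
from urllib import request

def invoke(endpoint: str, payload: dict) -> dict:
    """POST a JSON payload to a web service and return its JSON reply."""
    req = request.Request(endpoint, data=json.dumps(payload).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

def mediated(producer, consumer, inputs):
    # Conventional scheme: the intermediate result returns to the coordinator
    # (hop 1) before being forwarded to the consumer (hop 2).
    intermediate = invoke(producer, inputs)
    return invoke(consumer, intermediate)

def direct(producer, consumer, inputs):
    # Proposed scheme: the producer is told where to deliver its output, so
    # the intermediate result makes a single producer-to-consumer hop.
    return invoke(producer, {**inputs, "forward_to": consumer})
```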
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
web-service-streamlining.pdf | 460.44 KB |
Abstract
Since the beginning of the electronic era, public administrations and enterprises have been developing services through which citizens, businesses and customers can conduct their transactions with the offering entity. Each electronic service contains a substantial amount of knowledge in the form of help texts, rules of use or legislation excerpts, examples, validation checks etc. This knowledge was extracted from domain experts when the services were developed, especially in the phases of analysis and design, and was subsequently translated into software. In the latter form, though, knowledge cannot be readily used in organizational processes, such as knowledge sharing and the development of new services. In this paper, we present an approach for reverse engineering electronic services, in order to create knowledge items of high levels of abstraction, which can be used in knowledge sharing environments as well as in service development platforms. The proposed approach has been implemented and configured to generate artifacts for the SmartGov service development platform. Finally, an evaluation of the proposed approach is presented to assess its efficiency regarding various aspects of the reverse engineering process.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
eservice-reveng-rep.pdf | 1.29 MB |
Abstract
Although electronic transaction services are considered to be a necessity for e-government, it has so far not been possible to unleash their full potential. E-forms are central to the development of e-government, being a basic means for implementing the majority of the public services considered as required for local and central public administration authorities. In this paper, we present an object-oriented model for e-form-based administrative services, which spans the e-service lifecycle, including development, deployment and use by enterprises and citizens, data collection and communication with legacy information systems. The proposed approach encompasses semantic, structural and active aspects of e-forms, thus providing an inclusive framework for modelling electronic services.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
oo-approach-for-designing-eforms.pdf | 262.54 KB |
Abstract:
With the need for electronic services to be developed and deployed ever more rapidly, it is imperative that concrete models of electronic services are developed, to facilitate the systematic work of electronic service stakeholders and to provide concrete semantics and coherent representations across services developed within an organisation. Using the XML language to develop such a model offers a number of additional advantages, such as rich semantics, facilitation of data interchange, extensibility, high abstraction levels and the possibility for mechanical processing. In this paper we present the design aspects of an XML model for electronic services, which has been used for building a repository of interlinked elements representing e-services. A web-based interface for the management of this repository and a tool for automatically compiling e-service descriptions into executable images have been developed alongside. The model has been evaluated by a mixture of electronic service stakeholders, and the results of this evaluation are also presented.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
xml-model.pdf | 242.92 KB |
Abstract:
On their route to the electronic era, organisations release on the web more and more complex form-based services, which require users to enter numerous data items interrelated by business rules. In such a context, it is crucial to provide optimal form layouts, in order to present the service users with interfaces that facilitate their work. This paper presents an integrated environment which exploits the data item interrelations manifested by the business rules (or validation checks) to optimise the layout of the web forms comprising a complex service. The approach is validated through its application on a tax return form e-service.
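One way to picture how validation checks can drive layout is as a grouping problem: fields referenced by the same business rule are linked, and connected groups of fields are kept together on the same form or page. The union-find grouping below is a minimal sketch of that idea, not the environment's actual optimisation algorithm.

```python
# Minimal sketch: fields co-occurring in a validation rule are linked, and
# connected groups are placed together; this is an illustrative stand-in for
# the paper's layout optimisation, not its algorithm.

def group_fields(fields, rules):
    """fields: list of field names; rules: list of sets of field names that
    appear together in one validation check."""
    parent = {f: f for f in fields}

    def find(f):
        while parent[f] != f:
            parent[f] = parent[parent[f]]  # path halving
            f = parent[f]
        return f

    def union(a, b):
        parent[find(a)] = find(b)

    for rule in rules:
        first, *rest = list(rule)
        for other in rest:
            union(first, other)

    groups = {}
    for f in fields:
        groups.setdefault(find(f), []).append(f)
    return list(groups.values())

# An income check linking two fields pulls them onto the same page:
print(group_fields(["income", "tax", "name"], [{"income", "tax"}]))
```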
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
exploiting-semantics-and-validation-checks.pdf | 1.27 MB |
Abstract:
Transaction services offered by public authorities vary from simple forms with few fields to multi-form compound documents with hundreds of input areas. In the latter case, field placement within forms is of particular importance for facilitating the filling and error correction processes. In this paper we present an approach to improving the form layout by exploiting validation checks that are usually associated with electronic forms, as well as semantic information that may be attached to form fields by designers.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
improving-eform-layout.pdf | 112.75 KB |
Encyclopedia of E-Commerce, E-Government and Mobile Commerce, 2006
Abstract:
In this work, the usage of ontologies for meeting requirements related to e-service composition, e-service cataloguing, change management and administrative responsibility is examined. An ontology for e-government services is presented, covering various aspects of services, including administrative responsibility, meta-data, involved documents and legislation. Both the development and usage phase of the ontology are covered and directions for further exploitation of the potential offered by the ontological representation are given.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
ontology-for-egov-public-services.pdf | 253.68 KB |
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
reveng-6page-final.pdf | 120.18 KB |
Abstract:
Filling and submission of electronic forms is a key issue for e-government, since most electronic services offered in this context include some variant of electronic forms. So far, IT experts have been placed at the centre of the electronic form service lifecycle, undertaking the analysis, design, implementation and maintenance phases. This practice, however, implies various impediments, such as the need for large teams with diverse skills. In this paper we present experiences from developing and maintaining a set of electronic services for the Greek Ministry of Finance, and propose an approach to handling the electronic service lifecycle that balances responsibilities between domain experts and IT professionals. This approach enables a more holistic management of the electronic service lifecycle, by employing modelling and representation at high levels of abstraction and incorporating tools for automatically generating operative service instances from these high-level descriptions.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
framework-for-transactional-services.pdf | 402.23 KB |
Abstract:
Mobile commerce has been gaining significant importance in recent years as an alternative option of e-commerce for the moving user. The mobile applications through which m-commerce takes place operate in highly dynamic environments with diverse characteristics and under varying conditions. The characteristics and conditions of these environments (called context) should be exploited in order to provide adaptive services: services that offer a suitable user experience and deliver innovative and enhanced capabilities that will facilitate user interaction, attract new customers and retain existing ones. The goal of adaptivity is realized through the adaptation of the user interface, functionality and content of applications using the context information. Therefore, context-awareness constitutes an essential aspect, almost a requirement, of mobile services. In order to realize context-aware services, it is necessary to capture the context information from its sources, process it and distribute it to the software components that will use it. In this paper, we propose a software architecture for context information management suitable for m-commerce applications. We describe the functionality and characteristics of its components, as well as the interactions among these components.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
benou-SoftwareArchitecture.pdf | 196.21 KB |
Abstract:
Mobile commerce applications operate in highly dynamic environments with diverse characteristics and interesting challenges. The characteristics and conditions of these environments (called context) can be exploited to provide adaptive mobile services, in terms of user interface, functionality and content, in order to offer more effective m-commerce. Today, building adaptive mobile services is a complex and time-consuming task, due to the lack of standardized methods, tools and architectures for the identification, representation and management of context. Addressing some of these issues, recent works have provided formal extensions for various stages of the m-commerce application lifecycle, such as extended UML class diagrams for building design models, and have used context parameters in order to offer adaptive applications. Using these works as the basis, in this paper we propose a context management architecture which accommodates the requirements that have been identified for m-commerce applications. The proposed architecture is evaluated in terms of completeness, complexity, performance and utility, and compared against other approaches proposed in the literature regarding its suitability for supporting context-aware m-commerce applications.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
Authors' accepted version. The final publication is available at www.springerlink.com | 660.84 KB |
Abstract:
In the context of electronic government, e-services are a valuable instrument for offering high quality services to enterprises and individual citizens alike. While developing an e-service, it is usually possible to reuse elements that have already been crafted for other e-services, such as personal detail forms or widgets for collecting social security numbers, thus decreasing both the development effort and the time to deployment. A more generic framework for supporting reusability in the development of e-services includes the identification of reusable objects, the creation and population of a repository containing such components, and the empowerment of developers with tools allowing for the location, retrieval and adaptation of components to suit their specific tasks. In this paper, we conduct an analysis to recognise e-service components that offer reusability opportunities, and we present facilities and methods that enable e-service developers to exploit these opportunities while developing electronic services.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
jcmse.pdf | 325.86 KB |
Abstract:
Studying consumer behaviour and usage of environmental determinants in the mobile services domain contributes to the identification of context information which is critical for the effective operation of mobile commerce applications. Exploiting this information towards providing enhanced and innovative mobile services offers a competitive advantage within the highly demanding domain of m-commerce applications. However, in order to effectively exploit such context information, there is a need to design the necessary methods, software tools and information systems that will be employed for collecting, processing and disseminating this information. In this paper we develop a theoretical framework which defines the context information necessary for m-commerce applications, taking into account relevant marketing dimensions as well as privacy protection perspectives. Then, this framework is operationalized through the design of an appropriate software architecture which enables the standardization and management of context information.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
Authors' accepted version. The final publication is available at www.springerlink.com | 242.91 KB |
Abstract:
End-user development (EUD) aims to empower end-users with the necessary tools to implement their own software. In this sense, domain expert user development can be viewed as a special case of EUD. Domain experts can be considered a special case of end-users who possess the necessary knowledge of how the software should operate, what tasks it has to carry out, which business rules need to be enforced, which validation checks to perform, etc. It has to be noted that in some cases domain experts will not use the produced software themselves; this software, however, will indirectly support their work. For example, software developed by tax officers (domain experts) to be used by tax payers (actual end-users) simplifies the subsequent work of tax officers through the minimization of errors, the population of electronic data repositories etc.
Under a user-centred software engineering paradigm, domain expert users work along with software developers to create specifications for the software to be implemented by the latter group. This process is usually iterative: domain experts are questioned by developers, developers design a first prototype, the domain experts will most probably ask for changes, developers come back with an altered prototype, and so on. Since both user groups are usually involved in other assignments as well, this process can be time-consuming. Impedance mismatch problems, i.e. problems in the communication between the domain experts and the IT staff due to different backgrounds, perspectives and terminology, result in additional delays within this phase. An alternative would be to help domain experts create the software with minimal or no involvement of IT personnel. This is the approach adopted in the SmartGov project. In the framework of SmartGov, a knowledge-based platform was developed that assists public sector employees with suitable domain expertise to generate online transaction services by simplifying their development, maintenance and integration with installed IT systems.
Article available through the ACM Author-izer service.
Attachment | Size |
---|---|
domain-expert-development.pdf | 324.47 KB |
Abstract
E-forms have a central role in a significant number of e-government services. This paper presents a knowledge-based technical platform aiming to assist public sector employees to generate online transaction services by simplifying their development, maintenance and integration with installed IT systems. At the heart of this platform lies the knowledge and transaction services repository. This repository consists of a number of XML document types that incorporate all necessary details for creating and managing online transaction services. The main underlying idea is to provide a platform with intuitive interfaces that can be used directly by domain experts thus minimising the need for personnel with IT skills. This platform is currently under development within the IST SmartGov project.
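To give a flavour of how such an XML repository can be processed mechanically, the sketch below parses a small service descriptor; the element and attribute names are invented for illustration and do not reproduce the SmartGov document types.

```python
# Minimal sketch of reading one element of an XML service-description
# repository; the document type below is invented for illustration and does
# not reproduce the SmartGov schemas.

import xml.etree.ElementTree as ET

descriptor = """
<transaction-service id="tax-return">
  <form name="main">
    <field id="income" type="decimal" required="true"/>
    <validation applies-to="income" rule="value &gt;= 0"/>
  </form>
</transaction-service>
"""

root = ET.fromstring(descriptor)
for field in root.iter("field"):
    print(field.get("id"), field.get("type"), field.get("required"))
```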
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
e-challenges.pdf | 137.04 KB |
Abstract:
Electronic commerce, nowadays, is trying to extend its target audience and elevate the quality of services offered to end-users. Two important directions towards meeting these goals are the embracement of mobile users, whose number grows following the advent of communication technologies, and the inclusion of context-aware features in the delivered services to improve the efficiency of the dialogues between users and systems. The context taken into account may involve characteristics regarding the human user, the geographical location and the time of access, the devices employed to access the service, the network through which the user communicates with the system, the nature of the transaction carried out and so forth. It is clear that the development and successful operation of mobile and context-aware e-commerce introduce new challenges. In order to tackle such challenges, new methodologies, tools, architectures and platforms should be made available to assist analysts, designers, developers and operators in handling the various phases of the Mobile and Context-Aware E-Commerce Services (MCACS) lifecycle.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
editorial-jeco-si.pdf | 105.39 KB |
Abstract:
Reusability is the degree to which a software component or other work can be used in more than one program or software system. e-Government is a promising area for the application of reusability, since the services offered to citizens by the same administration, or even by different administrations, have common portions that can be developed only once and reused wherever appropriate. In this paper we present the design and implementation of an electronic service development environment which offers the potential to reuse components that have already been implemented for the realization of new services.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
ReusabilityInGovernmentEServices.pdf | 207.17 KB |
Abstract:
Electronic government employs electronic services to facilitate interaction with citizens and enterprises and deliver a rich and high quality spectrum of services. Development of electronic services can be greatly assisted, both in terms of development cost and roll-out time, by exploiting the reusability inherent in them. Reusability may be promoted by identifying reusable objects in the context of electronic service development, building and populating a repository with such components and providing the means for developers to locate, extract and adapt them to suit the task at hand. In this paper we analyse electronic services to recognise reusable components and present means and techniques that empower electronic service developers to build electronic services through reusable components.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
reusability-eservice-development.pdf | 253.61 KB |
Abstract:
Mobile commerce applications, adhering to the anytime-and-anywhere paradigm, are required to be flexible: they should be able to adapt their interface, services and content to a certain context. Several definitions of context, originating from various areas related to mobile commerce, have already been proposed. However, an integrated, formal and methodological approach for the determination and representation of context, adjusted to the special characteristics of mobile commerce applications, has not been presented so far. This is the challenge we address in this paper, through a conceptual model that includes: i) a clear and formal definition of context, ii) the depiction of its specific characteristics as metadata, iii) a methodology for its determination, and iv) the presentation of an extension of UML class diagrams for its representation, all of them tailored to the special nature of mobile commerce applications.
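The pairing of context with metadata (point ii above) can be illustrated with a small data structure that carries a context value together with provenance and freshness information; the specific attribute set below is an assumption standing in for the paper's metadata model.

```python
# Minimal sketch of a context parameter carrying metadata; the attributes
# (source, captured_at, confidence) are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ContextItem:
    name: str           # e.g. "location"
    value: object       # e.g. (37.97, 23.72)
    source: str         # sensing entity, e.g. "gps"
    captured_at: datetime
    confidence: float   # 0..1

    def is_fresh(self, max_age: timedelta) -> bool:
        """Whether the reading is recent enough for adaptation decisions."""
        return datetime.now() - self.captured_at <= max_age

loc = ContextItem("location", (37.97, 23.72), "gps", datetime.now(), 0.9)
print(loc.is_fresh(timedelta(minutes=5)))
```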
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
ctx-model-tr.pdf | 297.08 KB |
Abstract:
Transactional services are an indispensable part of e-government, since the provision of services to citizens and enterprises, as well as the interaction between the government and citizens or enterprises, are modelled mainly through such services. Recent assessments, however, show that the development of such services lags behind both the expectations of citizens and enterprises and the efforts made by governments. This can be attributed, amongst other reasons, to the "traditional" approach to electronic service development, which treats each electronic service as an isolated software project. In this paper, we propose an e-service development platform which covers the whole lifecycle of transactional services and facilitates the analysis, development, deployment and maintenance of these services.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
lifecycle-transactional-services.pdf | 191.12 KB |
Abstract:
Although form-based transactional services are fundamental to electronic government activities, their spread matches neither citizens' expectations nor the potential offered by state-of-the-art technologies. Besides any bureaucratic impediments, the primary reason for this is that traditional software engineering approaches cannot satisfactorily handle all aspects of the electronic service lifecycle. In this paper we present experiences from developing and maintaining a set of electronic services for the Greek Ministry of Finance, and propose a new approach for handling electronic service projects. The proposed approach has been successfully employed for developing extensions to the existing services, as well as some new ones.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
Transactional-egov-services-integrated-approach.pdf | 151.98 KB |
Abstract:
The tax collection process has been described as a bureaucratic one that confines citizen involvement to the passive fulfillment of administrative and financial obligations. Active citizen participation, in the form of some political say on the allocation of collected taxes, which could potentially improve and further legitimize tax collection, is not a part of the traditional taxation model. In this paper we describe a new taxation model which, in the spirit of participatory budgeting approaches, supports active citizen participation in decision making regarding the allocation of tax funds.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
Metteg07-PBTaxation.pdf | 221.54 KB |
Abstract:
E-government initiatives have been proven to deliver significant benefits, both for suppliers of electronic services (public authorities and organisations) and for the public, to whom services are addressed. However, the pace with which electronic services are made available and adopted is lower than planned or expected; governments tend to be slow in releasing new services, and citizens often prefer to conduct business with the government through paper forms and physical presence, rather than using online methods. This indicates that certain barriers exist that hinder the transition to electronic services. In this paper, we present the results of a survey among electronic service stakeholder groups, to identify the most important barriers to electronic service development. Documentation of barriers is considered important, since administrations may take certain measures to overcome them. Hints on how specific barriers may be overcome are also given in this paper.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
Barriers-to-electronic-services-development.pdf | 227.81 KB |
Abstract:
This paper presents the current model for taxation and for the distribution of taxes to government activities, and then introduces an alternative model, according to which tax-payers can determine, to some extent, the way the taxes they pay will be spent. The goal of this proposal is to increase citizen involvement and system transparency. The proposal is complemented by an architecture that supports the realization of the proposed scheme.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
e-Taxation.pdf | 57.23 KB |
Abstract
Both the business sector and the government are nowadays embracing Internet technologies in order to provide high quality on-line services to their "target groups". In both cases, service providers are trying to transform web surfers, casually visiting their web sites to seek information, into users of their electronic services, i.e. e-consumers and e-citizens. In this paper, we address the similarities and the differences between business and government, when they act as service providers, with respect to the factors for successful service provision and the issues that must be addressed.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
ecitizens-vs-econsumers.pdf | 75.48 KB |
Abstract:
A requirement for electronic government initiatives to succeed is the ability to offer a citizen-centric view of the government model. The most widely adopted paradigm supporting this task is the life event model, which combines basic services offered from multiple public authorities into a single, high-level service that corresponds to an event in a citizen's life. This composition is not always straightforward though, because the constituent services are generally developed in an independent fashion, using incompatible input and output formats; moreover the task of synchronising the documents required and produced by the services is tedious to implement and costly to maintain, since changes to requirements and legislation necessitate continuous updates to this scheme. In this paper, we present a blackboard architecture that can be used to deliver life-event oriented services to the citizens. The blackboard proposed for this architecture is an active one, undertaking the tasks of conversions, where appropriate. The blackboard couples a data flow approach with event-condition-action rules to enable dynamic formulation of life-event services, decentralising their development and maintenance.
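The coupling of data flow with event-condition-action rules might look roughly as follows: posting a document on the blackboard raises an event, and every rule whose condition holds fires an action, such as an active format conversion. The rule contents below are invented for illustration; the paper's rule language is not reproduced here.

```python
# Minimal sketch of an active blackboard with event-condition-action rules;
# the document shapes and the conversion rule are illustrative assumptions.

class Blackboard:
    def __init__(self):
        self.documents = {}
        self.rules = []  # (event_kind, condition, action) triples

    def on(self, event_kind, condition, action):
        self.rules.append((event_kind, condition, action))

    def post(self, name, doc):
        """Posting a document raises a 'posted' event; matching rules fire."""
        self.documents[name] = doc
        for kind, cond, act in self.rules:
            if kind == "posted" and cond(name, doc):
                act(self, name, doc)

bb = Blackboard()
# Active conversion: whenever an XML document arrives, derive the PDF
# rendering that some constituent service expects as input.
bb.on("posted",
      lambda name, doc: doc.get("format") == "xml",
      lambda board, name, doc: board.post(name + ".pdf",
                                          {**doc, "format": "pdf"}))
bb.post("birth-certificate", {"format": "xml", "body": "..."})
print(sorted(bb.documents))  # both the original and its PDF derivative
```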
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
blackboard-oriented-arch-for-e-gov.pdf | 429.63 KB |
Abstract:
Having realised the benefits resulting from delivering on-line public services in the context of electronic government, administrations strive to extend the spectrum of services offered to citizens and enterprises, as well as to engage multiple communication channels in service delivery, in order to increase the target audience and, consequently, the service effectiveness. So far, however, only the web channel has been sufficiently used for service delivery, whereas other channels have not been adequately exploited. One of the main reasons for this lag is the cost incurred for the development and maintenance of multiple versions of an electronic service, each version targeted to a different platform. In this paper, we present an approach and the associated tools for developing and maintaining electronic services that allow the automated production of different versions of an electronic service, each targeted to a specific platform.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
multichannel-final.pdf | 492.71 KB |
Abstract:
eConsultations constitute an effective means for inclusive and informed participation of citizens and society in policy, decision and law formulation processes, and an answer to democratic deficit issues. eConsultation platforms need to support all stages of consultation processes, including agenda setting and topic raising, legislation proposal publicity, notification of developments, proposal debate and commentary, and the collection, analysis and synthesis of views. In this paper we present the design of an open platform assisting policy makers and the civil society in the set-up, enactment, management and federation of inclusive and informed digital consultations. The proposed platform employs semantic techniques, such as content annotation and summarization, to support the consultation processes and provide targeted and digested information to participants, and facilitates the tailoring of eConsultation procedures by offering basic eConsultation activities as building blocks, which can be combined according to contextual needs. The platform also enables distinct eConsultation processes to be federated, allowing the exchange of information, which may be subject to different semantic annotations and classifications, according to the rules of each eConsultation process.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
eConWork.pdf | 158.39 KB |
Abstract:
Firms and organizations are increasingly exploiting electronic channels to reach their customers and create new business opportunities. To this end, electronic shops have been developed, either offering products from a single firm or encompassing multiple individual electronic stores, thus comprising electronic shopping malls. Two main concerns for e-commerce are personalization and the enhancement of user experience. Personalization addresses the ability to offer content tailored to the preferences of each user or user group. Preferences may be explicitly declared by the user, or derived by the system through inspecting user interaction; if the system dynamically reacts to changes of visitor behavior, it is termed adaptive. Enhancement of user experience is another major issue in e-commerce, given that 2D images and texts on the screen are not sufficient to provide information on product aspects such as physical dimensions, textures and manipulation feedback. Multimedia presentations can also be used as a means of "information acceleration" for promoting "really new" products. This article aims to specify a system that exploits the capabilities offered by adaptation and VR technologies to offer e-shoppers personalized and enhanced experiences, while addressing challenges related to the cost, complexity and effort of building and maintaining such a system.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
adapt-vr-mall.pdf | 196.48 KB |
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
Exploiting_Context_in_Mobile_Applications.pdf | 76.66 KB |
Abstract:
Documents submitted by citizens through electronic services deployed in the context of e-Government must usually undergo processing by some organisational information system, in order to complete the citizens' requests and for the reply to be returned to the citizen. The integration, however, of the e-service delivery platform and the organisational information system is often hindered for a number of reasons, including security considerations, platform diversity or idiosyncrasies of legacy information systems. In this paper we present a generic method for providing seamless communication between the two platforms, enabling the full integration of documents submitted through electronic services into the organisational workflow, thus leveraging the quality of services offered to the citizens and facilitating e-service development and operation.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
integrating-eg.pdf | 140.13 KB |
Abstract:
Public transaction services (such as e-forms), although perceived as the future of e-government, have not yet realised their full potential. E-forms have a significant role in e-government, as they are the basis for realising most of the twenty public services that all European Union member states have to provide to their citizens and businesses. The aim of this paper is to present a knowledge-based platform to assist public sector employees in generating online transaction services, by simplifying their development, maintenance and integration with already installed IT systems.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
kmgov-article.pdf | 436.31 KB |
Abstract:
Public transaction services (such as e-forms), although perceived as the future of e-government, have not yet realised their full potential. E-forms have a significant role in e-government, as they are the basis for implementing most of the twenty public services that all member states have to provide to their citizens and businesses. The aim of the SmartGov project is to specify, develop, deploy and evaluate a knowledge-based platform to assist public sector employees in generating online transaction services, by simplifying their development, maintenance and integration with already installed IT systems. This platform will be evaluated in two European countries (in one Ministry and one Local Authority). This paper outlines key issues in the development of the SmartGov system platform.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
smartgov-kb-for-transactional.pdf | 225.94 KB |
Abstract:
The ever-changing environment that information systems, and in particular e-government ones, model intensifies the need for systems that are able to easily, efficiently and transparently adapt to changing environments. Accommodating unanticipated changes implies that systems must be able to adapt to changes occurring in, and evolve in step with, their changing environment. Adaptation is concerned with monitoring, analysing and understanding the patterns of the user's interaction with the system. Similarly, an information system is said to be evolutionary if it can be purposefully used in a dynamic environment. E-government information systems, by virtue of their nature and function, are driven by the need to adapt and evolve. This suggests that the design and implementation of such systems must provide the necessary infrastructure for evolution and adaptability. In other words, e-government information systems must abide by the Tailorable Information Systems paradigm. In this work we present a case study for the development of a tailorable e-government information system.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
tailorability-for-egovernment.pdf | 60.78 KB |
Abstract:
Real-world information, knowledge and procedures after which information systems are modeled are generally of a dynamic nature and subject to changes, due to the emergence of new requirements or revisions to initial specifications. E-government information systems (eGIS) present a higher degree of volatility in their environment, since requirement changes may stem from multiple sources, including legislation changes, organizational reforms, end-user needs, interoperability and distribution concerns etc. To this end, the design and implementation of eGIS must adhere to paradigms and practices that facilitate the accommodation of changes to the eGIS as they occur in the real world. In this work, we present a role-based model for designing and implementing eGIS that can dynamically accommodate changes, providing the necessary facilities for modeling multiple aspects of the same real-world entities and delivering context-specific behaviour.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
tailorable-egov-information-systems.pdf | 144.32 KB |
Benou Poulcheria, Vassilakis Costas
Technical report TR-SSDBL-11-001
Attachment | Size |
---|---|
cma-tr_final.pdf | 273.82 KB |
Abstract:
In order to cover the ever-increasing need for more direct and easy access to information, new information access means need to be devised or existing ones need to be further exploited. In this paper, we present a mini web browser for ISDN card phones, which enables this widespread device to be used for accessing information on the World Wide Web. The implemented web browser supports HTML and WML pages, while special care was taken to tackle the limitations imposed by the ISDN card phone's hardware, such as the small screen, limited keyboard, and scarce processing and memory resources. One of the techniques employed to increase the capabilities of the ISDN card phone browser was the introduction of a proxy server, which transforms demanding media types into formats that can be handled by the ISDN card phone hardware.
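The proxy's transformation role can be illustrated with one concrete rewrite: replacing images with their alt text before delivery, so the terminal avoids image decoding altogether. This single rule is a minimal sketch under that assumption, not the proxy's actual rule set, which targets media formats beyond images.

```python
# Minimal sketch of proxy-side content adaptation: one illustrative rule that
# replaces <img> tags by their alt text for text-only terminals. The real
# proxy's transformation rules are not reproduced here.

import re

def strip_images(html: bytes) -> bytes:
    """Replace <img> tags by their alt text in square brackets, so small
    text-only screens keep the essential information."""
    return re.sub(rb'<img[^>]*\balt="([^"]*)"[^>]*>', rb'[\1]', html)

def adapt(content_type: str, body: bytes):
    """Pass through content the device handles; transform the rest."""
    if content_type.startswith("text/html"):
        return content_type, strip_images(body)
    return content_type, body

print(adapt("text/html", b'<p>Map: <img src="map.png" alt="city map"></p>'))
```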
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
mini-browser-f.pdf | 141.52 KB |
Abstract
In the information era, enterprises strive to be productive and efficient. One facet of this goal is to engage their employees in education programmes, helping them gain new experiences and knowledge and adapt to an ever-changing working environment. Such programmes require thorough design in order to achieve satisfactory results. Lately, enterprises, recognising the role technology can play in the education of their employees, have adopted systems that supplement the traditional educational model with mechanisms that enable the sharing of experiences and knowledge. In this paper we describe an architecture and a system prototype that allow users to easily search for information, interact with colleagues and share experiences, and compose and disseminate best practices and knowledge. The design of this system is based on insights gained from the operation of the Greek Taxation System.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
support-lifelong-learning.pdf | 295.59 KB |
Abstract:
The paper presents an integrated environment which enables museum personnel to catalogue and, at the same time, publish online museum exhibits. The system is based on international standards and is highly customisable to cater for the needs of a variety of museum types. Moreover, the underlying database allows storing, for the same exhibit, documentation for different audiences and in multiple languages, while it is extendable to accommodate new media types, languages, exhibits and information categories. The administrative part of the environment permits the restriction of certain functions to specific personnel roles, thus enforcing a general museum security policy regarding access to and modification of information. The environment presented is currently in use at the Athens University History Museum.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
museums2-tr.pdf | 341.22 KB |
Abstract:
This paper describes a technique for interceding between users and the information that they browse. This facility, which we term 'dynamic annotation', affords a means of editing Web page content 'on-the-fly' between the source Web server and the requesting client. Thereby, we have a generic means of modifying the content displayed to local users by adding, removing or reforming any information sourced from the World-Wide Web, whether this derives from local or remote pages. For some time, we have been exploring the scope for this device, and we believe that it affords many potential worthwhile applications. Here, we describe two varieties of use. The first variety focuses on support for individual users in two contexts (second-language support and second-language learning). The second variety of use focuses on support for groups of users. Once again, this is illustrated in two contexts (intra-group support and inter-group support). These differing applications have a common goal, which is to enrich the knowledge content of the materials placed before the user. Dynamic annotation provides a potent and flexible means to this end.
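A minimal illustration of such 'on-the-fly' editing, loosely in the spirit of the second-language support use, is to wrap known terms with explanatory mark-up before the page reaches the client; the glossary and the mark-up chosen below are invented examples, not the system's actual rewriting rules.

```python
# Minimal sketch of dynamic annotation: glossary terms in fetched pages are
# wrapped with explanatory mark-up before display. Glossary contents and the
# <span title=...> enrichment are illustrative assumptions.

import re

GLOSSARY = {"invoice": "a bill listing goods sold and the sums due"}

def annotate(html: str) -> str:
    """Wrap each known term so that hovering reveals its explanation."""
    for term, gloss in GLOSSARY.items():
        html = re.sub(rf"\b({re.escape(term)})\b",
                      rf'<span title="{gloss}">\1</span>',
                      html, flags=re.IGNORECASE)
    return html

print(annotate("<p>Please file the invoice by Friday.</p>"))
```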
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
contentenrichment-paper.pdf | 1.17 MB |
Abstract:
Web sites employ dynamically generated pages for content delivery more and more often, in order to increase their flexibility and provide up-to-date information. This practice, however, increases server load dramatically, since each request results in the execution of code, which may involve processing and/or access to information repositories. In this paper we present a scheme for maintaining a server-side cache of dynamically generated pages, allowing for cache consistency maintenance without placing heavy burdens on application programmers. We also present insights into architecture scalability and some results obtained from conducted experiments.
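As a rough illustration of such a scheme, the sketch below caches generated pages and tags them with the data items they used, so that a change to an item evicts exactly the affected pages; the function names and the dependency-tagging approach are illustrative assumptions, not the paper's actual design.

```python
# Sketch of a server-side cache for dynamically generated pages with
# dependency-based invalidation (an illustrative scheme, not the paper's).
cache = {}   # request key -> generated page
deps = {}    # data item -> set of request keys whose pages used it

def get_page(key, render, used_items):
    """Serve from the cache, generating and registering the page on a miss."""
    if key not in cache:
        cache[key] = render()                    # execute the page code once
        for item in used_items:
            deps.setdefault(item, set()).add(key)
    return cache[key]

def invalidate(item):
    """Evict every cached page that depends on a modified data item,
    keeping the cache consistent without per-page programmer bookkeeping."""
    for key in deps.pop(item, set()):
        cache.pop(key, None)

# Usage: repeated requests are served from the cache until the data changes.
page = get_page("/news?id=7", lambda: "<html>story 7</html>", {"story:7"})
invalidate("story:7")   # the story was edited -> its cached page is evicted
```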
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
controlled-caching-dynamic-web.pdf | 289.37 KB |
Abstract:
The paper addresses issues related to client/server technologies, and specifically the effectiveness of server-side programming techniques. The motive for this study was the need to create a lightweight and dynamic navigational aid for use in a web site. Towards this goal a number of possible solutions were considered, and for two of them an experiment was run to determine the one best suited to our case.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
ActiveWebPaper.pdf | 135.59 KB |
Abstract:
The construction of multilingual web sites is probably the best answer to the problem of the diverse cultural background of the Internet community. However, developing multiple instances of the same site in different languages induces increased overhead in both the implementation and the maintenance phases. The paper reviews current techniques and describes an alternative approach to constructing multilingual web sites, which eases the development and maintenance phases without exhibiting any of the drawbacks of existing tools. The paper concludes by proposing possible future enhancements.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
ml-web-site.pdf | 134.82 KB |
Abstract
An apparent limitation of existing Web pages is their inability to accommodate differences in the interests and needs of individual users. The present paper describes an approach that dynamically customises the content of public Web-based information via an interceding 'enhancement server'. The design and operation of this system are described with examples drawn from two current versions. Indications from early trials support the view that this approach affords considerable scope for accommodating the needs and interests of individual Web users.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
serving-enhanced-hypermedia.pdf | 401.76 KB |
Abstract
Wiki technology is increasingly being used in corporate environments to facilitate a broad range of tasks. This survey examines the use of wikis in a variety of organisational tasks, including the codification of explicit and tacit organisational knowledge and the formation of corporate communities of practice, as well as more specific processes such as collaborative information systems development, the interactions of the enterprise with third parties, management activities and organisational response in crisis situations. For each of the aforementioned corporate functions, the study examines the findings of related research literature to highlight the advantages and concerns raised by wiki usage and to identify specific solutions addressing them. Finally, based on the above findings, the study discusses various aspects of wiki usage in the enterprise and identifies trends and future research directions in the field.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
wiki-technical-report.pdf | 506.43 KB |
Abstract
The proposed methodological framework reviews and uses knowledge from the field of cognitive psychology in order to evaluate aspects of educational games. In particular, we concentrate on two components of human cognition that play a central role in learning, namely memory and motivation. After reviewing theories in the field, we created a questionnaire for evaluating educational games. The questionnaire incorporates different experimental findings of cognitive psychology; in particular, we have applied Maslow's motivation theory, behavioural findings on reinforcement, and experimental findings about attention and memory. We present the results obtained from the evaluation of two games, PAC-MAN and Mega Jump. The results confirmed the user ratings of the two games, showing that there seem to be cognitive reasons for the success or failure of different games. Finally, lists of guidelines for developers and instructors are included.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
cogneval.pdf | 723.47 KB |
Abstract:
Digitization efforts and web presentations are currently on-going in many museums, archives, libraries and cultural heritage institutions in general, exploiting the advent of the WWW and of digitization technologies. The main benefits of these efforts are exhibit cataloguing, effective exhibit management, preservation and showcasing, and presentation to the public through the WWW. However, many museums, especially smaller ones, cannot afford a commercial product and resort to using simple static web pages for their web presence and exhibit presentation. Content Management Systems (CMS), especially open-source ones which come at practically zero cost, are more and more frequently adopted by museums to create and maintain their websites, since they simplify the creation and editing of web pages and may be used by non-computer experts. In this work we present a module for the Drupal CMS, which provides functionality for (a) database schema extension to accommodate museum exhibit and collection information, (b) digital exhibit representation (DER) management, (c) provision of administration pages through which the museum personnel may enter and manage exhibit and collection information, (d) WWW showcasing and (e) batch data import/export, to facilitate information exchange with other museums.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
drupal-abstract-v1.3.pdf | 16.06 KB |
drupal_A3.jpg | 919.83 KB |
Abstract:
In this paper we present a system that facilitates virtual museum development and usage. The system is based on a game engine, thus ensuring minimal cost and good performance, and includes provisions that enable museum curators to design the virtual museum without any specialized knowledge. Besides visual and auditory information, museum curators may also provide metadata which give additional information to the visitor and can also be exploited when searching for exhibits with certain properties. A guide is also included in the museum, to present additional information to the visitors and aid them throughout their tour.
The article is available through the ACM Author-Izer service:
Attachment | Size |
---|---|
mus-game-auth-tr.pdf | 478.14 KB |
Abstract:
The process of designing systems or products largely depends on a number of decisions, like "who do I design for?", "what should my product do?", "what are the user requirements?" etc. Development teams usually base their decisions on experience and/or heuristics, and this is particularly the case in the development of online products, especially online exhibitions. The available solutions are frequently case studies of specific museums or institutions that wish to provide online content to actual or prospective visitors. In addition, the interdisciplinary nature of the endeavour, involving museology, technology and also education, poses important design problems. In the following sections, we present a generic methodology for the design of online exhibitions, using top-down processes and findings transferable across museum types, which aims to assist designers during the early decision stages.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
mus-chap-tr.pdf | 362.82 KB |
Abstract:
This chapter presents an architecture for supporting the creation of adaptive virtual reality museums on the web. It argues that the task of developing adaptive virtual reality museums is a complex one, presenting key challenges, and should thus be facilitated by means of a supporting architecture and relevant tools. The proposed architecture is flexible enough to cater for a variety of user needs, and modular, promoting extensibility, maintainability and tailorability. Adoption of this architecture will greatly simplify the development of adaptive virtual reality museums, reducing the needed effort to that of exhibit digitisation and user profile specification; user profiles are further refined dynamically through the user data recorder and the user modelling engine, which provide input for the virtual environment generator.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
adaptiveVRmuseumsOnWeb.pdf | 224.7 KB |
Abstract:
The current paper describes an approach to designing and implementing a virtual environment comprising ten different museums. The number of museums, as well as the variety of their exhibits, led to the adoption of a generalised strategy that catered for all museum presentation needs and allowed for future expansion. Furthermore, the system architecture supports the delivery of multimedia content either over the Internet or via a local immersive virtual reality installation.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
approach2design.pdf | 221.22 KB |
Abstract:
The emergence of the World Wide Web during the past few years has provided a medium for communicating information faster and to more people than before. The technologies used allow for the development of personalised information systems, adaptive to the users' needs. So far, the complexity of the design and implementation of Virtual Environments has restricted their usage to locally executed, stand-alone applications. In this paper we propose an architecture that permits and facilitates the dynamic, on-the-fly creation of Virtual Environments on the Web that adapt to the users' preferences and profiles. We focus on the algorithms available for creating an efficient virtual environment generation engine. We illustrate the proposed architecture with examples from a case study of a Virtual Museum.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Past years have seen the exploitation of multimedia techniques and, lately, the introduction of virtual reality methods to create new forms of presentation for museum exhibitions. Virtual Reality can offer a number of advantages to museums, providing a way to overcome common problems like the lack of space or the visitors' need to interact with the exhibits. A broad categorisation of virtual museums reveals that they vary from fully immersive CAVE systems to simple multimedia presentations. In our approach to developing a virtual reality museum we have designed a virtual environment (VE) where guests can visit a total of ten different museums. The processes of digitisation, architectural design and exhibit presentation are outlined and points of particular importance are explained. Exhibits from the real-world museums have been digitised and integrated in the VE. The system has been implemented in two versions: one fully immersive and one with a stereo display.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
build-vr-mus-in-mus.pdf | 278.97 KB |
Abstract
A virtual environment system installed within a real museum can offer a number of advantages, which are discussed in this paper: overcoming the lack of exhibition space, responding to the need for interaction with certain exhibits, affording easy transfer of exhibitions to remote sites. This paper also presents an approach towards designing and developing a virtual reality museum comprising ten different museums. The processes of digitisation, architectural design and exhibit presentation are outlined and points of particular importance are explained. Exhibits from real world museums have been digitised and integrated in this VE.
The article is available through the ACM Author-Izer service:
Attachment | Size |
---|---|
design-vr-mus-in-mus.pdf | 133.37 KB |
Abstract
The paper presents an environment that enables museum curators to catalogue exhibits and publish them on the web in multiple languages and media, including 3D, video and images. The system is extendable to accommodate new media types, languages, exhibits, information categories, etc. Visitors can formulate dynamic personalised exhibit collections using the search mechanisms provided by the system.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
FacilitatingVRMuseumsWebPresence.pdf | 443.59 KB |
Abstract:
When creating a virtual environment open to the public, a number of challenges have to be addressed. The equipment has to be chosen carefully in order to be able to withstand hard everyday usage, and the application not only has to be robust and easy to use, but also has to be appealing to the user. The current paper presents findings gathered from the creation of a multi-thematic virtual museum environment offered to visitors of real-world museums. A number of design and implementation aspects are described, along with an experiment designed to evaluate alternative approaches to implementing navigation in a virtual museum environment. The paper concludes with insights gained from the development of the Virtual Museum and portrays future research plans.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
RealExhibitionsInVR-museums.pdf | 475.18 KB |
Abstract:
This paper presents an innovative approach based on social-network gaming, which will extract players’ cognitive styles for personalization purposes. Cognitive styles describe the way individuals think, perceive and remember information and can be exploited to personalize user interaction. Questionnaires are usually employed to identify cognitive styles, a tedious process for most users. Our approach relies on a Facebook game for discovering potential visitors’ cognitive styles with an ultimate goal of enhancing the overall visitors' experience in the museum. By hosting such a game on the museum’s webpage and on Facebook, the museum aims to attract new visitors, as well as to support the user profiling process.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
gala13.pdf | 114.19 KB |
Abstract
Museums have started to realise the potential of new technologies for the development of edutainment content and services for their visitors. Virtual reality technologies promise to offer a vivid, enjoyable experience to museum guests, but the cost in time, effort and resources can prove to be overwhelming. In this paper, we propose the use of 3D game technologies for the purpose of developing affordable, easy to use and pleasing virtual environments. To this end, we present a case study based on an already developed version of a virtual museum and a newly implemented version that uses game technologies. The informal assessment indicates that game technologies can offer a promising and viable solution to the need for affordable desktop virtual reality systems.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
vr-museums-with-game-technology.pdf | 601.84 KB |
Abstract:
This special issue explores the extent to which virtual reality (VR) is affecting the creation of an electronic society. E-Society is a broad term used to describe a research area covering aspects of digital technologies for large user communities. Recent years have seen the emergence of various electronic services in an attempt to facilitate everyday life and improve the way common tasks are being carried out. The term e-Society covers a wide range of applications from e-government, e-democracy, and e-business to e-learning and e-health. In order for VR to contribute to the creation and advancement of e-Society, a number of issues have to be tackled. A successful VR system has to find a balance between the hardware requirements, user interaction methods, content presentation and the effort required for development and maintenance. Hardware requirements define to a large degree the extent to which an end-user can afford to execute the VR system at her home. User interaction methods have to cater for the variety of users' needs. Overall, design and implementation of a successful and engaging VR system is a rather difficult and complex task which requires increased effort in human power and resources in comparison to typical window based applications. Flexibility in development and subsequently maintenance of such a system can be achieved by adopting techniques already present in rapid application development environments, like abstraction, automatic code generation and reusability.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
editorial-vr-spec.pdf | 82.6 KB |
Abstract:
There is a wide range of meta-data standards for the documentation of museum related information, such as CIDOC-CRM; these standards focus on the description of distinct exhibits. In contrast, there is a lack of standards for the digitization and documentation of the routes followed and information provided by museum guides. In this work we propose the notion of the narrative, which can be used to model a guided museum visit. We provide a formalization for the narrative so that it can be digitally encoded, and thus preserved, shared, re-used, further developed and exploited, and also propose an intuitive visualization approach.
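To illustrate what a digitally encoded narrative might look like, here is a minimal sketch in Python, assuming a simple sequence-of-stops structure; the field names are hypothetical and do not reproduce the paper's formalization.

```python
# Sketch of a guided-visit 'narrative' as a sequence of stops (illustrative
# structure; the paper's formalization is richer).
from dataclasses import dataclass, field

@dataclass
class NarrativeSegment:
    exhibit_id: str      # the exhibit the guide stops at
    commentary: str      # the information provided at this stop
    duration_min: float  # nominal time spent at the stop

@dataclass
class Narrative:
    title: str
    guide: str                                    # author of the guided visit
    segments: list[NarrativeSegment] = field(default_factory=list)

    def route(self) -> list[str]:
        """The sequence of exhibits visited, i.e. the guided route."""
        return [s.exhibit_id for s in self.segments]
```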
Abstract:
In this short paper we examine the suitability of the Google Cardboard as a means for the delivery of personalized cultural experiences. Specifically, we develop the content and create the application required in order to provide highly personalized visits to the Archaeological Museum in Tripolis, Greece. We also examine the usability issues related to the use of the Google Cardboard. Early results are promising, and based on them we also outline the next steps ahead.
Abstract:
The Human-Computer Interaction and Virtual Reality Lab, at the Department of Informatics and Telecommunications of the University of Peloponnese, aims to conduct high-quality research in areas related to the analysis, design, development, and evaluation of HCI and VR systems and applications, and in parallel to support the teaching requirements of the department in the respective field. Over the last years the HCI-VR lab has particularly focused on Cultural Heritage, developing technologies primarily for spaces of cultural heritage that cover the diverse needs of heterogeneous audiences and provide a holistic visitor experience. The HCI-VR lab is actively participating in National and European projects on Cultural Heritage, such as FP7 Experimedia (https://hci-vr.dit.uop.gr/experimedia), H2020 CrossCult (www.crosscult.eu) and multiple projects from the National Strategic Reference Framework.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
The continuing need for more effective information retrieval has led to the creation of the notions of the semantic web and personalized information management, areas of study that very often employ ontologies to represent the semantic context of a domain. Consequently, the need for effective ontology visualization for design, management and browsing has arisen. There are several ontology visualizations available through the existing ontology management tools, but not as many evaluations to determine their advantages and disadvantages and their suitability for various ontologies and user groups. This work presents the preliminary results of an evaluation of four visualization methods in Protégé.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
comparative-ontoviz.pdf | 310.17 KB |
Abstract:
Digital libraries and historical archives are increasingly employing visualization systems to facilitate the information retrieval and knowledge extraction tasks of their users. Typically, each organization employs a single visualization system, which may not best suit the needs of certain user groups, specific tasks, or the properties of the document collections to be visualized. In this paper we present a context-based adaptive visualization environment, which embeds a set of visualization methods into a visualization library, from which the most appropriate one is selected for presenting information to the user. Methods are selected by examining parameters related to the user profile, the system configuration and the set of data to be visualized, and by employing a set of rules to assess the suitability of each method. The presented environment additionally monitors user behavior and preferences to adapt the visualization method selection criteria.
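The rule-based selection described above can be illustrated with a small sketch. This is a minimal illustration assuming a simple additive scoring scheme; the rules, context keys and method names are hypothetical, not the paper's actual configuration.

```python
# Sketch of rule-based visualization method selection (illustrative scoring
# scheme and rules, not the presented environment's actual rule set).
def select_method(context, library, rules):
    """Score each visualization method against the context (user profile,
    system configuration, data set) and return the best-suited one."""
    scores = {method: 0.0 for method in library}
    for rule in rules:
        for method in library:
            scores[method] += rule(method, context)
    return max(scores, key=scores.get)   # highest aggregate score wins

# Illustrative rules: each returns a (possibly negative) score contribution.
rules = [
    lambda m, c: 1.0 if c["items"] > 1000 and m == "treemap" else 0.0,
    lambda m, c: 1.0 if c["expertise"] == "novice" and m == "indented list" else 0.0,
    lambda m, c: -1.0 if c["screen"] == "small" and m == "3d" else 0.0,
]

context = {"items": 5000, "expertise": "novice", "screen": "large"}
best = select_method(context, ["treemap", "indented list", "3d"], rules)
```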
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
adaptive-visu.pdf | 83.64 KB |
Abstract
Hierarchically structured data collections often need to be visualized for the purposes of digital information management and presentation. File browsing, in particular, has an inherent hierarchical structure and plays an important role in the context of Personal Information Management (PIM). A multitude of file browsers are nowadays available, offering different functionalities, while users adopt diverse practices and habits for browsing activities. In this paper, we investigate these aspects to obtain insights into their advantages and disadvantages and suggest solutions in the area of PIM, as well as in other domains employing similar visualization paradigms. The presented study focuses on the two most widespread visualizations used by file browsers, namely the indented list and zoomable interface paradigms, and assesses their effectiveness for various tasks and contexts, both by exploiting results on existing evaluations on hierarchy visualizations and folder hierarchy visualizations in particular, and by conducting an interview-based user study.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
we2-tr.pdf | 93.53 KB |
Abstract:
Novel and intelligent visualization methods are being developed in order to accommodate user searching and browsing tasks, including new and advanced functionalities. In addition, research in the field of user modeling is progressing towards personalizing these visualization systems according to their users' individual profiles. However, employing a single visualization system may not best suit every information seeking activity. In this paper we present a visualization environment which is based on a visualization library, i.e. a set of visualization methods, from which the most appropriate one is selected for presenting information to the user. This selection is performed by combining information extracted from the context of the user, the system configuration and the data collection. A set of rules takes such information as input and assigns a score to all candidate visualization methods. The presented environment additionally monitors user behavior and preferences to adapt the visualization method selection criteria.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
iui-chapter.pdf | 216.77 KB |
Abstract:
The visualization of hierarchies is very important for digital information management and presentation systems. Especially in the context of Personal Information Management, file browsers play a particularly important role. Currently the most common file browser visualizations are Windows Explorer and the simple zoomable visualization offered by Microsoft Windows. This work explores the issue of file browser visualization through a user study based on interviews and an experiment.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
explorer-eval_final.pdf | 150.49 KB |
Abstract:
The need for effective ontology visualization for design, management and browsing has arisen as a result of the progress in the areas of the Semantic Web and Personal Information Management. There are several ontology visualizations available through existing ontology management tools, but not as many evaluations to determine their advantages and disadvantages and their suitability for various ontologies and user groups. This work presents selected results of an evaluation of four visualization methods in Protégé.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
onto-eval-tr.pdf | 246.87 KB |
Abstract
The on-going progress in the area of digital libraries has led to the beginning of a digitization effort in Historical Archives as well. The requirements of historical research, which works with histories of entities and incomplete information, create the need for supplementary tools to support users in handling the digitized content. This work builds on a user study of historians' information retrieval methods in order to create a set of tools for the context of historical archives, which will facilitate historical data storage, management and visualization.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
onto-archive-tr.pdf | 165.66 KB |
Abstract:
Incorporating digital tools in business and scientific research workflows is at the moment an on-going process, challenging and demanding, as every domain has its own needs in terms of data models and information retrieval methods. The information in some domains involves entity evolution, a characteristic that introduces additional tasks, such as finding all evolution stages of an entity, and poses additional requirements for the information retrieval process. In this paper we present a user study aiming to investigate the effectiveness of current ontology browsing and visualization methods in supporting users in tasks involving research on entity evolution.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
onto-evolution-visu-tr.pdf | 4.03 MB |
Abstract:
Hierarchical data structures are one of the most commonly used data structures in computer science, and therefore numerous methods and techniques have been proposed for their visualization. In this paper, we present our findings from a user study, in which a number of folder visualization environments were evaluated to assess (a) how efficiently a number of tasks can be performed within the different environments, (b) the extent to which using a particular visualization may help the user acquire an accurate cognitive image of the hierarchy structure and its contents, and (c) the overall user experience from using a particular visualization environment. The visualization environments considered are representative of major visualization paradigms (zoomable user interfaces, context+focus and space-filling), while both 2D and 3D environments have been included.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
visualizing-hierarchies-abstract.pdf | 48.54 KB |
Abstract:
Most ontology development methodologies and tools for ontology management deal with ontology snapshots, i.e. they model and manage only the most recent version of ontologies, which is inadequate for contexts where the history of the ontology is of interest, such as historical archives. This work presents a modelling approach for entity and relationship timelines in the Protégé tool, complemented with a visualization plug-in which enables users to examine entity evolution along the timeline.
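To make the idea of timeline modelling concrete, the following is a minimal sketch assuming simple validity intervals attached to property values; the TimedFact structure and the sample data are invented for illustration, not the Protégé-based modelling of the paper.

```python
# Sketch of entity timelines: each fact carries a validity interval, so the
# ontology's history is kept rather than only its latest snapshot
# (illustrative structure and data, not the paper's modelling).
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TimedFact:
    subject: str
    predicate: str
    value: str
    start: int           # year from which the fact holds
    end: Optional[int]   # None = still valid

# Hypothetical data: an entity whose name evolves over time.
facts = [
    TimedFact("school-1", "name", "School of Law", 1837, 1911),
    TimedFact("school-1", "name", "School of Law and Economics", 1911, None),
]

def snapshot(facts, year):
    """The state of the ontology at a given year, as seen on the timeline."""
    return [f for f in facts
            if f.start <= year and (f.end is None or year < f.end)]
```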
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
onto-time-short.pdf | 101.01 KB |
Abstract:
This paper presents the results of a study on how historians conduct research in a historical archive, and the methodologies they use while searching. Historical research involves finding, using, interpreting and correlating information within primary and secondary sources, in order to understand past events. The collection of historical data is accomplished through methodical and comprehensive research in primary and secondary sources. An important factor in our study was to understand what kind of data and/or information historians look for in a library or historical archive, either printed or digitized, and which research methodologies or research models they use while investigating a historical archive. Since this issue has not been addressed thus far, and there are therefore no methods for elucidating the research methodologies or research models that historians employ, we formulated a questionnaire comprising seven information retrieval tasks commonly addressed in the context of historical research. History researchers were asked to describe in detail how they would proceed in searching for the information needed to complete these tasks. Through this procedure we aimed to investigate the different ways a historian can use to tackle a specific question, examine whether a common research methodology exists, and record the historical researchers' expectations and preferences. The insight gained from this investigation can be used for educational purposes, since it could be useful in the development of a methodology for conducting research on historical information. Furthermore, the findings can be exploited in the context of organizing documents within historical source repositories, so as to facilitate the retrieval of documents by historians; finally, the presented findings can serve as a preliminary requirement analysis phase for building tools that will enable historians to access the information they need more rapidly and fully.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
research-methodology-tr.pdf | 170.77 KB |
Abstract:
Ontologies have been proven invaluable tools both for the semantic web and for personal information management. In the context of a historical archive, an ontology may provide meaningful and efficient support for search tasks, as well as serve as a tool for the storage and presentation of historical data. The creation of such an ontology is, however, complex, since the digitized archive documents are not in text format and the concepts that must be captured may vary among different time periods. This work presents a user-centric methodological approach for extracting the ontology of a historical archive, focusing on the evaluation issues related to this process. The approach is exemplified through cases from its application to the University of Athens Historical Archive.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
onto-meth.pdf | 232.45 KB |
Abstract:
User profiling is commonly employed nowadays to enhance usability as well as to support personalization, adaptivity and other user-centric features. So far, application designers have modelled user profiles mainly in an ad-hoc manner, thus hindering application interoperability at the user profile level and increasing both the amount of work to be done and the possibility of errors or omissions in the profile model. This work aims at creating a user profile ontology that incorporates the concepts and properties used to model the user profile. Existing literature, applications and ontologies related to the domain of user context and profiling have been taken into account in order to create a general, comprehensive and extensible user model. This ontology can be used as a reference model, in order to alleviate the aforementioned issues. The model, available for download, is exemplified through its application in two different areas, personal information management and adaptive visualization.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
onto-user-final.pdf | 176.1 KB |
Abstract:
Incorporating digital tools in business and scientific research workflows is at the moment an on-going process, challenging and demanding, as every domain has its own needs in terms of data models and information retrieval methods. The information in some domains involves entity evolution, a characteristic that introduces additional tasks, such as finding all evolution stages of an entity, and poses additional requirements for the information retrieval process. In this paper, we present a user study aiming to investigate how the different aspects of ontology modelling affect the performance and effectiveness of users in information retrieval tasks that are carried out using visualization methods. The results of the user study are analyzed and guidelines for ontology design are offered.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Creating stories for exhibitions is a fascinating and at the same time laborious task. As every exhibition is designed to tell a story, museum curators are responsible for analyzing each exhibit in order to extract the messages that form a story, and for positioning the objects accordingly, in the correct order, within the museum space. In this context, we analyze how the technological advances in the fields of sensors and the Internet of Things can be utilized to construct a "smart space", where exhibits can communicate with the visitors and with each other, and can be organized automatically so as to generate rich, personalized, coherent and highly stimulating experiences. We present the architecture of a system named "exhiSTORY", which intends to provide the appropriate infrastructure for museums and places where exhibitions are held, in order to support smart exhibits. We discuss and analyze the system's architecture and the ways it can be applied in a cultural space.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
This paper takes as its premise that the web is a place of action, not just information, and that the purpose of global data is to serve human needs. The paper presents several component technologies, which together work towards a vision where many small micro-applications can be threaded together using automated assistance to enable a unified and rich interaction. These technologies include data detector technology to enable any text to become a start point of semantic interaction; annotations for web-based services so that they can link data to potential actions; spreading activation over personal ontologies, to allow modelling of context; algorithms for automatically inferring 'typing' of web-form input data based on previous user inputs; and early work on inferring task structures from action traces. Some of these have already been integrated within an experimental web-based (extended) bookmarking tool, Snip!t, and a prototype desktop application On Time, and the paper discusses how the components could be more fully, yet more openly, linked in terms of both architecture and interaction. As well as contributing to the goal of an action and activity-focused web, the work also exposes a number of broader issues, theoretical, practical, social and economic, for the Semantic Web.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
jws-web-of-action-tr.pdf | 1.21 MB |
Abstract:
Most ontology development methodologies and tools for ontology management deal with ontology snapshots, i.e. they model and manage only the most recent version of ontologies, which is inadequate for contexts where the history of the ontology is of interest, such as historical archives. This work presents a set of requirements for the modeling and visualization of a temporal ontology used as a tool for the representation of historical information. In accordance with these requirements, a visualization plug-in was designed and implemented, featuring a set of tools that enable users to efficiently examine ontology temporal characteristics, such as class and instance evolution along the timeline.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
onto-time-final-rcis.pdf | 284.81 KB |
Abstract:
Historical research involves finding, using and correlating information within primary and secondary sources, in order to communicate an understanding of past events. In this process, historians employ their scientific knowledge, experience and intuition to formulate queries (who was involved in an event, when did an event occur, etc.), and subsequently try to locate the pertinent information in their sources. In this paper, we investigate how historians formulate queries, which query terms are chosen, and how historians proceed in searching for related information in sources. The insight gained from this investigation can subsequently be used for organizing documents within historical source repositories and for building tools that will enable historians to access the needed information more rapidly and fully.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
metho-hist-tr.pdf | 288.97 KB |
Abstract:
Human History is a huge mesh of interrelated facts and concepts, spanning beyond borders, encompassing global aspects and ultimately constituting a shared, global experience. This is especially the case for European history, which is highly interconnected by nature; however, most History-related experiences offered today to the greater public, from schools to museums, are siloed. The CrossCult project aims to provide the means for offering citizens and cultural venue visitors a more holistic view of history, in the light of cross-border interconnections among pieces of cultural heritage, other citizens' viewpoints and physical venues. To this end, the CrossCult project will build a comprehensive knowledge base encompassing information and semantic relationships across cultural information elements, and will provide the technological means for delivering the contents of this knowledge base to citizens and venue visitors in a highly personalized manner, creating narratives for interactive experiences that maximise situational curiosity and serendipitous learning. The CrossCult platform will also exploit the cognitive/emotional profiles of the participants as well as temporal, spatial and miscellaneous features of context, including holidays and anniversaries, social media trending topics and so forth.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
Draft paper version | 404.61 KB |
Abstract:
Users nowadays need to manage large amounts of information, including documents, e-mails, contacts, and multimedia content. To facilitate the tasks of organisation, maintenance, and retrieval of personal information, a number of semantics-based methods have emerged; these methods employ (personal) ontologies as an underlying infrastructure for organising and querying the personal information space. In this paper we present OntoFM, a novel personal information management tool that offers a mindmap-inspired interface to facilitate user interactions with the information base. Besides serving as an information retrieval aid, OntoFM allows the user to specify and update the semantic links between information items, constituting thus a complete personal information management tool.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
EDBTICDT14-CAMERA-v4.pdf | 1.1 MB |
Abstract:
OntoFM is a novel file manager that bases its interactivity on the user's personal ontology, offering semantic browsing and searching mechanisms for locating files, a mind-map-inspired ontology visualization, and simple, intuitive functionality that accommodates less experienced users. The implementation of OntoFM is based on Protege, an extensible open source ontology editor and knowledge-base framework. The file manager is implemented as a Protege tab widget, retrieving information from the user's personal ontology. Ontology visualization is based upon the OntoGraf plug-in, which has been extended to comply with the mind map paradigm. Currently the ontology visualization pane permits only navigational functions, while other ontology management functions have been hidden to reduce the complexity of the user interface.
Demo material: http://www.uop.gr/~trifon/OntoFM
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Ontologies, as knowledge engineering tools, allow information to be modelled in ways resembling those used by the human brain, and may be very useful in the context of personal information management (PIM) and Task Information Management (TIM). This work proposes the use of ontologies as a long-term knowledge store for PIM-related information, and the use of spreading activation over ontologies in order to provide context inference to tools that support TIM. Details on the ontology creation and content are provided, along with a full description of the spreading activation algorithm and its preliminary evaluation.
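To make the mechanism concrete, the following is a minimal sketch of spreading activation over a toy personal ontology; the graph, weights, decay factor and firing threshold are illustrative assumptions, not the parameters used in this work.

```python
# Minimal spreading-activation sketch over a toy personal ontology.
# The graph, weights, decay factor and thresholds are illustrative
# assumptions; the algorithm described in the paper may differ in detail.

def spread_activation(graph, seeds, decay=0.5, threshold=0.1, max_iters=10):
    """graph: {node: [(neighbour, weight), ...]}; seeds: {node: initial activation}."""
    activation = dict(seeds)
    frontier = set(seeds)
    for _ in range(max_iters):
        next_frontier = set()
        for node in frontier:
            out = activation.get(node, 0.0) * decay
            if out < threshold:
                continue  # too weak to fire any further
            for neighbour, weight in graph.get(node, []):
                gained = out * weight
                if gained >= threshold:
                    activation[neighbour] = activation.get(neighbour, 0.0) + gained
                    next_frontier.add(neighbour)
        if not next_frontier:
            break
        frontier = next_frontier
    return activation

# Toy ontology: a task linked to a document, a person and a project.
graph = {
    "task:write-report": [("doc:draft.tex", 0.9), ("person:alice", 0.6)],
    "doc:draft.tex":     [("project:crosscult", 0.8)],
    "person:alice":      [("project:crosscult", 0.5)],
}
print(spread_activation(graph, {"task:write-report": 1.0}))
```

Entities ending up with high activation (here the document, the person and, transitively, the project) form the inferred context for the current task.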
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
BI_tr.pdf | 554.37 KB |
Abstract:
Ontologies have been proven invaluable tools in areas like the semantic web and personal information management. There have been many research efforts to create ontologies and supporting tools for Natural Sciences and Biology in particular (e.g. the GO (http://www.geneontology.org/) ontology and supporting tools). However, the domain of History, the science of studying, recording and organizing the knowledge of the past, has yet to benefit from adopting ontologies. In this work, we present our findings in this area, focusing on the aspects of ontology modeling and ontology visualization.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
poster-history-onto-v1b.pdf | 12.47 KB |
poster-smaller.pdf | 2.76 MB |
Abstract:
Ontologies, as sets of concepts and their interrelations in a specific domain, have proven to be a useful tool in the areas of digital libraries, the semantic web and personalized information management. As a result, there is a growing need for effective ontology visualization for design, management and browsing. There exist several ontology visualization methods and also a number of techniques used in other contexts that could also be adapted for ontology representation. The purpose of this work is to present these techniques and categorize their characteristics and features in order to assist method selection and promote future research in the area of ontology visualization.
Article available through the ACM Author-izer service.
Attachment | Size |
---|---|
onto-vis-survey-final.pdf | 2.04 MB |
Abstract:
In the age of digital information, more and more digital libraries and historical archives are using information systems in order to facilitate document retrieval and to provide better visualization of search results and document presentation. Much research has been done in the field of digital libraries, but historical archives, which have particular needs, have not received the same attention. To this end, we investigate the use of new tools, based on the ontology of the historical archive, in order to provide a new and effective method for document retrieval in a dynamic environment that takes into account the collaboration needs of the users.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
onto-aided.pdf | 213.3 KB |
Abstract:
Ontologies offer a flexible and expressive layer of abstraction, very useful for capturing the semantics of information repositories and facilitating their retrieval either by the user or by the system to support user tasks. This work presents an ontology-based user profiler, in the context of a Personal Interaction Management System (PIMS). The profiler, based on an ontology of the users' domain, enables them to create their personal ontology by initially choosing one of the available template ontologies as a starting point, which they subsequently populate and customize. The profiler employs a web interface which allows users to populate their personal ontology through forms, hiding ontology complexities and peculiarities. Forms are dynamically generated through ontology views, which are specified by ontology designers.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
PIM-chi-profiler_v1.6.pdf | 422.69 KB |
Abstract:
Nowadays, more and more cultural venues tend to utilize social media as a main tool for marketing, spreading their messages, engaging the public and raising public awareness of culture. It has come to a point where the massive amount of content in social media makes it a tedious procedure to reach the appropriate audience, the people that would really be stimulated by cultural information. In this context, we assume that establishing conversations of high impact can guide cultural venues to the audiences that can benefit most. These conversations usually include the so-called influencers, users whose opinion can affect many people on social media, the latter usually referred to as followers. In this research paper we examine the characteristics of the influencers that can affect the procedures of a cultural venue on social media. The research is done within the scope of the "CrossCult" EU-funded project.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Wishing to connect cultural heritage, games and social networks, the present work describes games to be used within the framework of a European H2020 project. For the purposes of supporting the museum visit before, during and after it, five games were designed for social networks to accomplish user profiling, to promote the museum and the application through social network dissemination, to introduce museum items and themes, and to also function as visit souvenirs. The games are also presented in a generic framework for games in cultural heritage, which has been used successfully in the past.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
gala_2016.pdf | 784.05 KB |
Abstract:
This paper describes methods to allow spreading activation to be used on web-scale information resources. Existing work has shown that spreading activation can be used to model context over small personal ontologies, which can be used to assist in various user activities, for example, in auto-completing web forms. This previous work is extended and methods are developed by which large external repositories, including corporate information and the web, can be linked to the user's personal ontology and thus allow automated assistance that is able to draw on the entire web of data. The basic idea is to augment the personal ontology with cached data from external repositories, where the choice of what data to fetch or discard is related to the level of activation of entities already in the personal ontology or cached data. This relies on the assumption that the working set of highly active entities is relatively small; empirical results are presented, which suggest these assumptions are likely to hold. Implications of the techniques are discussed for user interaction and for the social web. In addition, warm world reasoning is proposed, applying rule-based reasoning over active entities, potentially merging symbolic and sub-symbolic reasoning over web-scale knowledge bases.
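The fetch-or-discard idea lends itself to a short sketch; here `fetch_neighbours` is a hypothetical stand-in for a linked-data lookup (e.g. dereferencing a URI), and both thresholds are assumptions rather than values from the paper.

```python
# Sketch of activation-guided caching of external (web-scale) data.
# fetch_neighbours() is a hypothetical stand-in for a linked-data lookup;
# the two thresholds are illustrative assumptions.

FETCH_THRESHOLD = 0.4   # fetch neighbours of entities at least this active
EVICT_THRESHOLD = 0.05  # drop cached entities that have gone this quiet

def refresh_cache(activation, cache, fetch_neighbours):
    # Pull in the neighbourhood of highly active entities...
    for entity, level in list(activation.items()):
        if level >= FETCH_THRESHOLD and entity not in cache:
            cache[entity] = fetch_neighbours(entity)
    # ...and evict entities whose activation has decayed away, relying on
    # the assumption that the working set of active entities stays small.
    for entity in list(cache):
        if activation.get(entity, 0.0) < EVICT_THRESHOLD:
            del cache[entity]
    return cache
```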
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
spreadingact-wsr-tr.pdf | 644.21 KB |
Abstract:
Various forms of spreading activation have been used in a number of web systems, not least in the PageRank algorithm. In our own work we have been using this as a technique for managing context over small and large ontologies, and both our own work and that in LarKC suggest that spreading activation has the potential to aid in reasoning over web-scale data sets, including the growing set of linked open data resources. Of particular importance is that spreading activation can be applied locally to a dynamic, self-selecting working set of a (practically) unbounded linked data collection, as well as globally to the entire collection. However, this potential does not come without problems, some concerning the nature of the algorithm on any large data set, and some more to do with the particular nature of linked open data.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
WebSci2011-Spreading-Act.pdf | 398.74 KB |
Abstract:
In this paper, we examine how social media can be linked to cultural heritage and in particular how we can incorporate games, social networks, history reflection and culture. More specifically, we explore the following aspects: (a) how social media sites can be integrated into the museum user experience, (b) how user interactions within the social media, both within the context of the museum experience and outside it, can be exploited to enhance the quality of recommendations made to the users, (c) how trending topics from social media can be used to link museum exhibits with today’s topics of interest and (d) how multi-level related terms extraction from social media data can lead to proposals for reflections to users. The end goal is to provide increased stimuli for users to study exhibits deeper and reflect on them, as well as to trigger discussion between the users, thus maximizing the impact of a museum visit.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
roles-tr.pdf | 319.4 KB |
Abstract:
Social media have gained the majority of attention on the Internet, attracting an extreme number of daily visitors worldwide. The amount of information exchanged is vast, while users have become equally producers and consumers of data. Words like “trending”, “influencers”, “likes” and “viral” are on the daily agenda of data analyzers, as they are associated with factors that play an important role in the influence of social media content on its audience. These aspects are nowadays strongly taken into account by organizations that want to draw the public’s attention to the content they deliver, and in this context cultural institutions have already started to give great consideration not only to their presence in social media, but also to the monitoring and exploitation of social media dynamics. In this paper we propose a method that can enable cultural venues to benefit from matches between their own content and ongoing discussions on social media. More specifically, we extract trending topics that can be related semantically to the content of a cultural institute and examine how a venue can benefit by exploiting these matches. The proposed approach has been developed in the scope of the “CrossCult” H2020 project, and has been experimentally tested by analyzing the case of Twitter in the Greek language.
Abstract:
CrossCult is a newly started project that aims to make reflective history a reality in the European cultural context. In this paper we examine how the project aims to take advantage of advances in semantic technologies in order to achieve its goals. Specifically, we see what the quest for reflection is and, through practical examples from two of the project's flagship pilots, explain how semantics can assist in this direction.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
semantics in crosscult.pdf | 1.56 MB |
Abstract:
In this paper, we present the vision of an open source learning analytics platform, able to harvest data from different sources, including e-learning platforms and environments, registrar's information systems, alumni systems, etc., so as to provide all stakeholders with the necessary functionality to make decisions on the learning process. The platform's architecture is modular, allowing the introduction of new functionality or connection to new systems to collect needed data. All data can be analyzed and presented through interactive visualizations to find correlations between metrics, to make predictions for students or student groups, to identify best practices for instructors and let them explore 'what-if' scenarios, to offer students personalized recommendations and personalized detailed feedback, etc. Our objective is to inform and empower all stakeholders to improve the learning experience.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
learning-analytics-tr.pdf | 466.61 KB |
Attachment | Size |
---|---|
User profile ontology version 1, RDF version | 3.68 KB |
User profile ontology version 1, Protege 3.1 version | 6.63 KB |
Abstract:
CrossCult H2020 is a European project, the aim of which is the reflection of history in a cultural setting. In this paper, we describe how social media can be linked to cultural heritage and in particular how we can incorporate games, social networks, history reflection and culture. The paper presents the case study of one of the project pilots, to show how history reflection can be enhanced with the use of social networks.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
SMAP_2016.pdf | 446.47 KB |
Abstract:
Recent research in the domain of Personal Information Management has recognized the need for a paradigm shift towards a more activity-oriented system. Ontologies, as semantic networks with a structure not dissimilar to the one used by the human brain for storing long-term knowledge, may be very useful as the basis of such a system. This work proposes the use of spreading activation over ontologies in order to provide a task-based system and its associated tools with methods to record semantics related to documents and tasks and to support user context inference.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
spreading-act-IUI-CSKGOI.pdf | 476.23 KB |
Abstract:
The recent progress of the World Wide Web has created new needs for information sharing in virtual communities. WhereRU is a multiuser GPS position reporting system that allows users to make their location publicly available, as well as to associate it with information on places, persons and events that may later also serve as reminders of their experiences when traveling.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
whereRU-tr.pdf | 403.01 KB |
Abstract:
Museum exhibitions are designed to tell a story; this story is woven by curators and in its context a particular aspect of each exhibit, fitting to the message that the story is intended to convey, is highlighted. Adding new exhibits to the story requires curators to identify for each exhibit its aspects that fit the message of the story and to position the exhibit at the right place in the story thread. The availability of rich semantic information for exhibits allows for exploiting the wealth of meanings that museum exhibits express, enabling the automated or semi-automated generation of practically countless stories that can be told. Personalization algorithms can then be employed to choose from these stories the ones most suitable for each individual user, based on the semantics of the stories and information within the user profile. In this work we examine how opportunities arising from technological advances in the fields of IoT and semantics can be used to develop smart, self-organizing exhibits that cooperate with each other and provide visitors with comprehensible, rich, diverse, personalized and highly stimulating experiences. These notions are included in the design of a system named exhiSTORY, which also exploits previously ignored information and identifies previously unseen semantic links. We present the architecture of the system and discuss its application potential.
Abstract:
This work is an extension of the Protégé tool to accommodate the modeling and presentation of entity history, i.e. past values of properties and/or relationships; each such value is timestamped with the period during which it was (or will be) valid in the real world. To this end, the presented extension provides the relevant modeling constructs and presentation facilities.
The extension can be used in contexts where the modeling of entities' history is important, such as historical archives, museums, etc.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
In contemporary internet architectures, including server farms and blog aggregators, web log data may be scattered among multiple cooperating peers. In order to perform content personalization through the provision of recommendations on such architectures, it is necessary to employ a recommendation algorithm; however, the majority of such algorithms are centralized, necessitating excessive data transfers and exhibiting performance issues when the number of users or the volume of data increases. In this paper we propose an approach where the clickstream information is distributed to a number of peers, which cooperate for discovering frequent patterns and for generating recommendations, introducing (a) architectures that allow the distribution of both the content and the clickstream database to the participating peers and (b) algorithms that allow collaborative decisions on the recommendations to the users, in the presence of scattered log information. The proposed approach may be employed in various domains, including digital libraries, social data, server farms and content distribution networks.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
dist-recom-sys.pdf | 306.54 KB |
Abstract:
This paper investigates the effect that smart routing and recommendations can have on improving the Quality of Experience of museum visitors. The novelty of our approach consists of taking into account not only user interests but also their visiting styles, as well as modeling the museum not as a sterile space but as a location where crowds meet and interact, impacting each visitor’s Quality of Experience. The investigation is done by an empirical study on data gathered by a custom-made simulator tailored for the museum user routing problem. Results are promising and future potential and directions are discussed.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
masie2013_submission_2.pdf | 520.75 KB |
Abstract:
We introduce a novel knowledge-based recommendation algorithm for leisure time information to be used in social networks, which enhances the state-of-the-art in this algorithm category by taking into account (a) qualitative aspects of the recommended places (restaurants, museums, tourist attractions etc.), such as price, service and atmosphere, (b) influencing factors between social network users, (c) the semantic and geographical distance between locations and (d) the semantic categorization of the places to be recommended. The combination of these features leads to more accurate and better user-targeted leisure time recommendations.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
The most widely used similarity metrics in collaborative filtering, namely the Pearson Correlation and the Adjusted Cosine Similarity, adjust each individual rating by the mean of the ratings entered by the specific user, when computing similarities, due to the fact that users follow different rating practices, in the sense that some are stricter when rating items, while others are more lenient. However, a user’s rating practices change over time, i.e. a user could start as lenient and subsequently become stricter or vice versa; hence by relying on a single mean value per user, we fail to follow such shifts in users’ rating practices, leading to decreased rating prediction accuracy. In this work, we present a novel algorithm for calculating dynamic user averages, i.e. time-in-point averages that follow shifts in users’ rating practices, and exploit them in both user-user and item-item collaborative filtering implementations. The proposed algorithm has been found to introduce significant gains in rating prediction accuracy, and outperforms other dynamic average computation approaches that are presented in the literature.
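One way to picture the idea (not the paper's exact algorithm) is a per-user mean computed over a sliding time window, so the adjustment follows the user's current rating practices; the window length below is an illustrative assumption.

```python
# Illustrative dynamic user average: the mean of a user's ratings within a
# sliding time window ending at the moment of interest, falling back to the
# static per-user mean when the window is empty. The window length is an
# assumption; the paper derives its own dynamic-average computation.
from bisect import bisect_left, bisect_right

def dynamic_average(timestamps, ratings, t, window):
    """timestamps sorted ascending; returns the mean of ratings in (t-window, t]."""
    lo = bisect_left(timestamps, t - window)
    hi = bisect_right(timestamps, t)
    if hi > lo:
        return sum(ratings[lo:hi]) / (hi - lo)
    return sum(ratings) / len(ratings)  # fall back to the static mean

ts = [1, 5, 9, 30, 31]
rs = [2, 3, 2, 5, 5]          # the user turned lenient around t=30
print(dynamic_average(ts, rs, t=10, window=10))   # ~2.33 (strict phase)
print(dynamic_average(ts, rs, t=32, window=10))   # 5.0  (lenient phase)
```

The dynamic average then replaces the single static mean when adjusting each rating inside the Pearson Correlation or the Adjusted Cosine Similarity.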
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Recommender systems are based on information about users' past behavior to formulate recommendations about their future actions. However, as time goes by, the interests and likings of people may change: people listen to different singers or even different types of music, watch different types of movies, read different types of books and so on. Due to this type of change, an amount of inconsistency is introduced into the database, since a portion of it no longer reflects the current preferences of the user, which is its intended purpose.
In this paper, we present a pruning technique that removes aged user behavior data from the ratings database, which are bound to correspond to invalidated preferences of the user. Through pruning, (1) inconsistencies are removed and data quality is upgraded, (2) better rating prediction generation times are achieved and (3) the ratings database size is reduced. We also propose an algorithm for determining the amount of pruning that should be performed, allowing the tuning and operation of the pruning algorithm in an unsupervised fashion.
The proposed technique is evaluated and compared against seven aging algorithms, which reduce the importance of aged ratings, and a state-of-the-art pruning algorithm, using datasets with varying characteristics. It is also validated using two distinct rating prediction computation strategies, namely collaborative filtering and matrix factorization. The proposed technique needs no extra information concerning the items' characteristics (e.g. categories that they belong to or attributes' values), can be used with all rating databases that include a timestamp, and has proven effective for user-item databases of any size and under both rating prediction computation strategies.
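A minimal sketch of the two ingredients, age-based pruning plus selection of the pruning level by validation error, might look as follows; the tuple layout, the candidate cutoffs and the `validation_mae` callback are illustrative assumptions, not the paper's procedure.

```python
# Sketch of age-based rating pruning: drop ratings older than a cutoff and
# pick the cutoff that minimises validation error. The rating tuple layout
# and the evaluation callback are illustrative assumptions.

def prune(ratings, cutoff_ts):
    """ratings: list of (user, item, rating, timestamp); keep recent ones."""
    return [r for r in ratings if r[3] >= cutoff_ts]

def choose_pruning_level(ratings, candidate_cutoffs, validation_mae):
    """validation_mae(pruned_ratings) -> float; smaller is better.
    Returns the cutoff whose pruned database predicts best on held-out data."""
    return min(candidate_cutoffs,
               key=lambda c: validation_mae(prune(ratings, c)))
```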
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
In this technical report, we present the experimental findings from applying an algorithm that considers virtual near neighbors (VNNs) in the rating prediction formulation process, in order to increase coverage in the context of sparse datasets.
To this end, the algorithm is applied to seven sparse datasets, which are widely used in recommender system research. Additionally, the algorithm is applied to one dense dataset, in order to gain insight into the performance of the proposed algorithm in this class of datasets as well.
In short, the algorithm introduces the concept of VNNs, i.e. virtual users created from the combination of real ones, which are used as candidate NNs in the rating prediction computation process.
In these experiments, the optimal values for the parameters used in the algorithm are investigated, and more specifically the thresholds under which two individual users can constitute a VNN.
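As a rough illustration of the VNN construction (the report's exact rules may differ), two users' rating vectors can be merged into a virtual user, subject to an agreement threshold on co-rated items; the threshold below is an illustrative assumption.

```python
# Minimal sketch of forming a virtual near neighbour (VNN) from two real
# users: the VNN rates the union of the items the two users rated, averaging
# where both rated. The agreement threshold is an illustrative assumption.

def make_vnn(u_ratings, v_ratings, max_disagreement=1.0):
    """u_ratings, v_ratings: {item: rating}. Returns None if the users
    disagree too much on co-rated items to form a coherent VNN."""
    common = set(u_ratings) & set(v_ratings)
    if common:
        disagreement = sum(abs(u_ratings[i] - v_ratings[i]) for i in common) / len(common)
        if disagreement > max_disagreement:
            return None
    vnn = dict(u_ratings)
    for item, r in v_ratings.items():
        vnn[item] = (vnn[item] + r) / 2 if item in vnn else r
    return vnn

print(make_vnn({"a": 4, "b": 5}, {"b": 4, "c": 2}))  # {'a': 4, 'b': 4.5, 'c': 2}
```

Because the VNN covers the union of the two users' items, it can serve as a candidate neighbour for users that neither real user alone would match, which is what raises coverage on sparse datasets.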
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
soda-TR-19001.pdf | 872.56 KB |
Abstract:
In this paper, we introduce a novel recommendation algorithm, which exploits data sourced from web services provided by the Internet of Things in order to produce more accurate venue recommendations. The proposed algorithm provides added value for the web services offered by the Internet of Things and enhances the state-of-the-art in this algorithm category by taking into account (a) web of things data regarding the context of the user and the context of the venues to be recommended (restaurants, movie theatres, etc.), such as the user’s geographical position, road traffic and weather conditions, (b) qualitative aspects of the venues, such as price, atmosphere or service, (c) the semantic similarity of venues and (d) the influencing factors between social network users, derived from user participation in social networks. The combination of these features leads to more accurate and better user-targeted recommendations. We also present a framework which incorporates the above characteristics, and we evaluate the presented algorithm, both in terms of performance and recommendation quality.
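A simple way to picture how factors (a)-(d) could be combined is a weighted sum over normalised inputs; the weights and the feature encoding below are illustrative assumptions, not the paper's calibrated model.

```python
# Sketch of combining the factors (a)-(d) into a single venue score via a
# weighted sum; weights and 0..1-normalised inputs are illustrative
# assumptions, not the paper's calibrated model.

WEIGHTS = {"context": 0.3, "quality": 0.3, "semantic": 0.2, "social": 0.2}

def venue_score(context_fit, quality, semantic_sim, social_influence):
    """All inputs normalised to [0, 1]; e.g. context_fit already folds in
    distance, road traffic and weather suitability for the venue."""
    return (WEIGHTS["context"]  * context_fit +
            WEIGHTS["quality"]  * quality +
            WEIGHTS["semantic"] * semantic_sim +
            WEIGHTS["social"]   * social_influence)

print(venue_score(0.9, 0.7, 0.6, 0.4))  # venues are then ranked by this score
```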
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
One of the major problems that social networks face is the continuous production of successful, user-targeted information in the form of recommendations, which are produced by exploiting technology from the field of recommender systems. Recommender systems are based on information about users’ past behavior to formulate recommendations about their future actions. However, as time goes by, social network users may change preferences and likings: they may like different types of clothes, listen to different singers or even different genres of music and so on. This phenomenon has been termed concept drift. In this paper: (1) we establish that when a social network user abstains from rating submission for a long time, it is a strong indication that concept drift has occurred and (2) we present a technique that exploits the abstention interval concept to drop from the database ratings that do not reflect the current social network user’s interests, thus improving prediction quality.
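The abstention-interval heuristic can be sketched as follows, under the assumption (ours, for illustration) that the gap threshold is supplied externally.

```python
# Sketch of the abstention-interval idea: if a user's gap between consecutive
# ratings exceeds a threshold, ratings before the (last such) gap are taken to
# predate a concept drift and are dropped. The threshold is illustrative.

def drop_pre_abstention(timestamps, ratings, max_gap):
    """timestamps sorted ascending; returns the data after the last long gap."""
    start = 0
    for i in range(1, len(timestamps)):
        if timestamps[i] - timestamps[i - 1] > max_gap:
            start = i  # everything before this gap is considered stale
    return timestamps[start:], ratings[start:]

ts = [1, 2, 3, 200, 201]
rs = [4, 5, 4, 1, 2]
print(drop_pre_abstention(ts, rs, max_gap=90))  # keeps only the recent phase
```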
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
Full text; also available at http://www.mdpi.com/2227-9709/5/2/21 | 869.89 KB |
Abstract:
Users that populate ratings databases, such as IMDB, follow different marking practices, in the sense that some are stricter, while others are more lenient. This aspect has been captured by the most widely used similarity metrics in collaborative filtering, namely the Pearson Correlation and the Adjusted Cosine Similarity, which adjust each individual rating by the mean of the ratings entered by the specific user, when computing similarities. However, relying on the mean value presumes that the users' marking practices remain constant over time; in practice though, it is possible that a user's marking practices change over time, i.e. a user could start as strict and subsequently become lenient, or vice versa. In this work, we propose an approach to take marking practice shifts into account by (1) introducing the concept of dynamic user rating averages, which follow the users' marking practice shifts, (2) presenting two alternative algorithms for computing a user's dynamic averages and (3) performing a comparative evaluation between these two algorithms and the classic static average (single mean value) that the Pearson Correlation uses.
Abstract:
In this paper, we introduce a pruning algorithm which removes aged user ratings from the rating database used by collaborative filtering algorithms, in order to (1) improve prediction quality and (2) minimize the rating database size, as well as the rating prediction generation time. The proposed algorithm needs no extra information concerning the items' characteristics (e.g. categories that they belong to or attributes' values) and can be used with all rating databases that include a timestamp. Furthermore, we propose and validate a method for identifying the most prominent combination of a pruning algorithm and a pruning level for a dataset, thus allowing the selection of the pruning algorithm and pruning level to be performed in an unsupervised fashion.
Abstract:
When rating predictions are computed in user-user collaborative filtering, each individual rating is typically adjusted by the mean of the ratings entered by the specific user. This practice takes into account the fact that users follow different rating practices, in the sense that some are stricter when rating items, while others are more lenient. However, users’ rating practices may also differ in rating variability, in the sense that one user may be entering ratings close to her mean, while another user may be entering more extreme ratings, close to the limits of the rating scale. In this work, we (1) propose an algorithm that considers users’ rating variability in the rating prediction computation process, aiming to improve rating prediction quality and (2) evaluate the proposed algorithm against seven widely used datasets considering three widely used variability measures and two user similarity metrics. The proposed algorithm, using the “mean absolute deviation around the mean” variability measure, has been found to introduce considerable gains in rating prediction accuracy, in every dataset and under both user similarity metrics tested.
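A hedged sketch of such a variability-aware prediction, using the mean absolute deviation around the mean as the variability measure, is given below; the exact formulation in the paper may differ.

```python
# Sketch of a variability-aware prediction: each neighbour's deviation from
# their own mean is normalised by their mean absolute deviation (MAD) around
# the mean, then rescaled by the target user's MAD. This mirrors the idea in
# the abstract; the paper's exact formulation may differ.

def mad(ratings, mean):
    """Mean absolute deviation around the mean; 1.0 guards a degenerate user."""
    devs = [abs(r - mean) for r in ratings]
    return (sum(devs) / len(devs)) if devs else 1.0

def predict(u_mean, u_mad, neighbours):
    """neighbours: list of (similarity, neighbour_rating, n_mean, n_mad)."""
    num = sum(s * (r - m) / d for s, r, m, d in neighbours if d > 0)
    den = sum(abs(s) for s, _, _, d in neighbours if d > 0)
    return u_mean + u_mad * num / den if den else u_mean

nbrs = [(0.8, 4.0, 3.0, 0.5), (0.4, 2.0, 3.5, 1.2)]
print(predict(u_mean=3.2, u_mad=0.9, neighbours=nbrs))  # ~4.03
```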
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Collaborative filtering systems analyze ratings databases to identify users with similar likings and preferences, termed near neighbors, and then generate rating predictions for a user by examining the ratings of his near neighbors for items that the user has not yet rated; based on rating predictions, recommendations are then formulated. However, these systems are known to exhibit the “gray sheep” problem, i.e. the situation where no near neighbors can be identified for a number of users, and hence no recommendation can be formulated for them. This problem is more intense in sparse datasets, i.e. datasets with a relatively small number of ratings compared to the number of users and items. In this work, we propose a method for alleviating this problem by exploiting user dissimilarity, under the assumption that if some users have exhibited opposing preferences in the past, they are likely to do so in the future. The proposed method has been evaluated against seven widely used datasets and has proven particularly effective in increasing the percentage of users for which personalized recommendations can be formulated in the context of sparse datasets, while at the same time maintaining or slightly improving rating prediction quality.
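A minimal sketch of the fallback-to-dissimilar-users idea follows; the similarity thresholds are illustrative assumptions, not the values tuned in the paper.

```python
# Sketch of exploiting dissimilarity for "gray sheep" users: when no
# sufficiently similar neighbours exist, strongly *dissimilar* users are
# used instead, their deviations being flipped by the negative similarity
# weight in the standard mean-adjusted formula. Thresholds are illustrative.

def predict_with_dissimilar(u_mean, candidates, sim_thr=0.3, dissim_thr=-0.3):
    """candidates: list of (similarity, neighbour_rating, neighbour_mean)."""
    nns = [c for c in candidates if c[0] >= sim_thr]
    if not nns:  # gray sheep: fall back to dissimilar users
        nns = [c for c in candidates if c[0] <= dissim_thr]
    if not nns:
        return None  # still no basis for a personalised prediction
    num = sum(s * (r - m) for s, r, m in nns)   # negative s flips the deviation
    den = sum(abs(s) for s, _, _ in nns)
    return u_mean + num / den

print(predict_with_dissimilar(3.0, [(-0.8, 5.0, 3.0), (-0.5, 4.0, 3.5)]))
```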
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
SoDa Technical report TR-19002v2
Note: this version extends and supersedes the first version of the report, which is available here.
Abstract:
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
TR-19002-v02.pdf | 451.64 KB |
SoDa Technical report TR-19002
Note: a newer version of this report is available here.
Abstract:
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
TR-19002.pdf | 235.34 KB |
Abstract:
Query personalization has emerged as a means to handle the issue of information volume growth, aiming to tailor query results to match the goals and interests of each user. Query personalization dynamically enhances queries, based on information regarding user preferences or other contextual information; typically, enhancements involve the incorporation of conditions that filter out results deemed of low value to the user and/or the ordering of results so that data of high value are presented first. In the domain of personalization, social network information can prove valuable; users’ social network profiles, including their interests, influence from social friends, etc., can be exploited to personalize queries. In this paper, we present a query personalization algorithm which employs collaborative filtering techniques and takes into account influence factors between social network users, leading to personalized results that are better targeted to the user.
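For illustration only, a query-enhancement step of this kind might add a filter and an ordering derived from predicted preference scores; the schema and scores below are hypothetical, and a real system would use parameterised queries rather than string concatenation.

```python
# Sketch of the query-enhancement step: given CF-predicted preference scores
# per category, add a filtering condition and an ordering to a base query.
# Schema, scores and thresholds are hypothetical illustrations.

def personalise(base_sql, category_scores, top_k=3, min_score=0.5):
    top = sorted((c for c in category_scores.items() if c[1] >= min_score),
                 key=lambda kv: -kv[1])[:top_k]
    if not top:
        return base_sql  # nothing confident enough to filter on
    cats = ", ".join("'%s'" % c for c, _ in top)
    order = "CASE category " + " ".join(
        "WHEN '%s' THEN %d" % (c, i) for i, (c, _) in enumerate(top)) + " END"
    return "%s WHERE category IN (%s) ORDER BY %s" % (base_sql, cats, order)

print(personalise("SELECT * FROM venues",
                  {"museum": 0.9, "cinema": 0.7, "stadium": 0.2}))
```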
Read the article online via ScienceDirect
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
One of the major problems in the domain of social networks is the handling and diffusion of the vast, dynamic and disparate information created by its users. In this context, the information contributed by users can be exploited to generate recommendations for other users. Relevant recommender systems take into account static data from users' profiles, such as location, age or gender, complemented with dynamic aspects stemming from the user behavior and/or social network state such as user preferences, items' general acceptance and influence from social friends. In this paper, we enhance recommendation algorithms used in social networks by taking into account qualitative aspects of the recommended items, such as price and reliability, the influencing factors between social network users, the social network user behavior regarding their purchases in different item categories and the semantic categorization of the products to be recommended. The inclusion of these aspects leads to more accurate recommendations and diffusion of better user-targeted information. This allows for better exploitation of the limited recommendation space, and therefore online advertisement efficiency is raised.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
Recommendation information diffusion in social networks | 684.46 KB |
Abstract:
Users that enter ratings for items follow different rating practices, in the sense that, when rating items, some users are more lenient, while others are stricter. This aspect is taken into account by the most widely used similarity metric in user-user collaborative filtering, namely the Pearson Correlation, which adjusts each individual user rating by the mean value of the ratings entered by the specific user, when computing similarities. However, a user’s rating practices change over time, i.e. a user could start as strict and subsequently become lenient or vice versa. In that sense, the practice of using a single mean value for adjusting users’ ratings is inadequate, since it fails to follow such shifts in users’ rating practices, leading to decreased rating prediction accuracy. In this work, we address this issue by using the concept of dynamic averages introduced earlier and we extend earlier work by (1) introducing the concept of rating time clusters and (2) presenting a novel algorithm for calculating dynamic user averages and exploiting them in user-user collaborative filtering implementations. The proposed algorithm incorporates the aforementioned concepts and is able to follow shifts in users’ rating practices more successfully. It has been evaluated using numerous datasets, and has been found to introduce significant gains in rating prediction accuracy, while outperforming the dynamic average computation approaches presented earlier.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
In the information filtering (or publish/subscribe) paradigm, clients subscribe to a server with continuous queries that express their information needs, while information sources publish documents to servers. Whenever a document is published, the continuous queries satisfying this document are found and notifications are sent to the appropriate subscribed clients. Although information filtering has been on the research agenda for about half a century, there is a huge paradox when it comes to benchmarking the performance of such systems: there is a striking lack of a benchmarking mechanism (in the form of a large-scale standardised test collection of continuous queries and the relevant document publications) specifically created for evaluating filtering tasks. This work aims at filling this gap by proposing a methodology for automatically creating massive continuous query datasets from available document collections. We intend to publicly release all related material (including the software accompanying the proposed methodology) to the research community after publication.
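As a toy illustration of the general direction (not the proposed methodology itself), characteristic terms can be sampled from each document by tf-idf to form small conjunctive continuous queries; tokenisation and parameters are illustrative assumptions.

```python
# Toy sketch of turning a document collection into synthetic continuous
# queries: pick each document's most characteristic terms by tf-idf and emit
# small conjunctive queries. Tokenisation and parameters are illustrative.
import math
import re
from collections import Counter

def queries_from_collection(docs, terms_per_query=3):
    tokenised = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    df = Counter(t for toks in tokenised for t in set(toks))  # document frequency
    n = len(docs)
    queries = []
    for toks in tokenised:
        tf = Counter(toks)
        scored = sorted(tf, key=lambda t: -(tf[t] * math.log(n / df[t] + 1)))
        queries.append(scored[:terms_per_query])  # conjunction of top terms
    return queries

docs = ["filtering systems match continuous queries against publications",
        "data warehouses load cleansed data through ETL workflows"]
print(queries_from_collection(docs))
```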
Abstract:
Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we delve into the logical design of ETL scenarios and provide a generic and customizable framework in order to support the DW designer in his task. First, we present a metamodel particularly customized for the definition of ETL activities. We follow a workflow-like approach, where the output of a certain activity can either be stored persistently or passed to a subsequent activity. Also, we employ a declarative database programming language, LDL, to define the semantics of each activity. The metamodel is generic enough to capture any possible ETL activity. Nevertheless, in the pursuit of higher reusability and flexibility, we specialize the set of our generic metamodel constructs with a palette of frequently-used ETL activities, which we call templates. Moreover, in order to achieve a uniform extensibility mechanism for this library of built-ins, we have to deal with specific language issues. Therefore, we also discuss the mechanics of template instantiation to concrete activities. The design concepts that we introduce have been implemented in a tool, ARKTOS II, which is also presented.
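The template-instantiation mechanics can be illustrated with a small sketch; note that the paper defines template semantics in LDL, so the Python analogue below only mimics the instantiation step, with hypothetical template names.

```python
# Tiny sketch of the template idea: a generic, parameterised activity is
# specialised into a concrete ETL activity at instantiation time, and the
# output of one activity feeds the next (workflow-like). Hypothetical names;
# the paper's actual templates are defined in LDL.
import itertools

class Template:
    def __init__(self, name, logic):
        self.name, self.logic = name, logic  # logic: params -> per-row function

    def instantiate(self, **params):
        return Activity("%s%s" % (self.name, params), self.logic(**params))

class Activity:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def run(self, rows):  # rows pass through; None means "filtered out"
        return [out for out in (self.fn(row) for row in rows) if out is not None]

# Two "palette" templates: a null check and surrogate-key assignment.
not_null = Template("NotNull",
                    lambda field: (lambda row: row if row.get(field) is not None else None))
add_key = Template("SurrogateKey",
                   lambda field, start=1: (
                       lambda row, c=itertools.count(start): {**row, field: next(c)}))

rows = [{"name": "alice"}, {"name": None}, {"name": "bob"}]
for activity in [not_null.instantiate(field="name"), add_key.instantiate(field="id")]:
    rows = activity.run(rows)
print(rows)  # [{'name': 'alice', 'id': 1}, {'name': 'bob', 'id': 2}]
```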
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Data visualization is one of the major issues of database research and OLAP,
being a decision support technology, is clearly in the center of this effort.
Still, so far, visualization has not been incorporated in the abstraction
levels of DBMS architecture (conceptual, logical, physical), neither has it
been formally treated in this context. In this paper we start by reconsidering
the separation of the aforementioned abstraction levels to take visualization
into consideration. Then, we present the Cube Presentation Model (CPM), a novel
presentational model for OLAP screens. The proposal lies on the fundamental
idea of separating the logical part of a data cube computation, from the
presentational part of the client tool. Then, CPM can be naturally mapped on
the Table Lens, which is an advanced visualization technique from the
Human-Computer Interaction area, particularly tailored for cross-tab reports.
Based on the particularities of Table Lens, we propose automated proactive
support to the user for the interaction with an OLAP screen. Finally, we
discuss implementation and usage issues in the context of an academic prototype
system (CubeView) that we have implemented.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Data visualization is one of the big issues of database research.
OLAP as a decision support technology is highly related to the
developments of data visualization area. In this paper we
demonstrate how the Cube Presentation Model (CPM), a novel
presentational model for OLAP screens, can be naturally mapped
on the Table Lens, which is an advanced visualization technique
from the Human-Computer Interaction area, particularly tailored
for cross-tab reports. We consider how the user interacts with an
OLAP screen and based on the particularities of Table Lens, we
propose an automated proactive users support. Finally, we discuss
the necessity and the applicability of advanced visualization
techniques in the presence of recent technological developments.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Extraction-Transformation-Loading (ETL) and Data Cleaning tools are pieces of
software responsible for the extraction of data from several sources, their
cleaning, customization and insertion into a data warehouse. To deal with the
complexity and efficiency of the transformation and cleaning tasks we have
developed a tool, namely ARKTOS, capable of modeling and executing practical
scenarios, by providing explicit primitives for the capturing of common tasks.
ARKTOS provides three ways to describe such a scenario, including a graphical
point-and-click front end and two declarative languages: XADL (an XML variant),
which is more verbose and easy to read and SADL (an SQL-like language) which
has a quite compact syntax and is, thus, easier for authoring.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Extraction-Transformation-Loading (ETL) tools are pieces of software
responsible for the extraction of data from several sources, their cleansing,
customization and insertion into a data warehouse. Literature and personal
experience have guided us to conclude that the problems concerning the ETL
tools are primarily problems of complexity, usability and price. To deal with
these problems we provide a uniform metamodel for ETL processes, covering the
aspects of data warehouse architecture, activity modeling, contingency
treatment and quality management. The ETL tool we have developed, namely
ARKTOS, is capable of modeling and executing practical ETL scenarios by
providing explicit primitives for the capturing of common tasks. ARKTOS
provides three ways to describe an ETL scenario: a graphical point-and-click
front end and two declarative languages: XADL (an XML variant), which is more
verbose and easy to read, and SADL (an SQL-like language), which has a quite
compact syntax and is, thus, easier for authoring.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Extract-Transform-Load (ETL) workflows are data-centric
workflows responsible for transferring, cleaning, and loading data from their
respective sources to the warehouse. Previous research has identified graph-based
techniques that construct the blueprints for the structure of such
workflows. In this paper, we extend existing results by explicitly incorporating
the internal semantics of each activity in the workflow graph. Apart from the
value that blueprints have per se, we exploit our modeling to introduce rigorous
techniques for the measurement of ETL workflows. To this end, we build upon
an existing formal framework for software quality metrics and formally prove
how our quality measures fit within this framework.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Extraction-Transformation-Loading (ETL) tools are pieces of
software responsible for the extraction of data from several
sources, their cleansing, customization and insertion into a data
warehouse. In this paper, we focus on the problem of the
definition of ETL activities and provide formal foundations for
their conceptual representation. The proposed conceptual model is
(a) customized for the tracing of inter-attribute relationships and
the respective ETL activities in the early stages of a data
warehouse project; (b) enriched with a 'palette' of frequently used
ETL activities, like the assignment of surrogate keys, the check for
null values, etc.; and (c) constructed in a customizable and
extensible manner, so that the designer can enrich it with his own
recurring patterns for ETL activities.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
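The 'palette' idea lends itself to a small illustration. Below is a minimal Python sketch of two such frequently used activities, surrogate key assignment and a null-value check; the function and field names are ours, not the paper's, and a real ETL engine would stream rows rather than materialise lists.

```python
# Sketch of two 'palette' activities: surrogate key assignment and a
# null-value check. Names are illustrative, not taken from the paper.

def assign_surrogate_keys(rows, natural_key, lookup, next_sk):
    """Attach a warehouse surrogate key to each row, extending the
    lookup table for natural keys seen for the first time."""
    for row in rows:
        nk = row[natural_key]
        if nk not in lookup:
            lookup[nk] = next_sk
            next_sk += 1
        row["sk"] = lookup[nk]
    return rows, next_sk

def check_not_null(rows, attribute):
    """Route rows with a NULL attribute to a rejection list."""
    accepted, rejected = [], []
    for row in rows:
        (rejected if row.get(attribute) is None else accepted).append(row)
    return accepted, rejected

rows = [{"cust_id": "C1", "name": "Alice"}, {"cust_id": "C2", "name": None}]
rows, _ = assign_surrogate_keys(rows, "cust_id", {}, next_sk=1)
ok, bad = check_not_null(rows, "name")
```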
Abstract:
Extract-Transform-Load (ETL) workflows are data-centric workflows
responsible for transferring, cleaning, and loading data from their respective
sources to the warehouse. In this paper, we build upon existing graph-based
modeling techniques that treat ETL workflows as graphs by (a) extending the
activity semantics to incorporate negation, aggregation and self-joins, (b)
complementing querying semantics with insertions, deletions and updates, and (c)
transforming the graph to allow zoom-in/out at multiple levels of abstraction (i.e.,
passing from the detailed description of the graph at the attribute level to more
compact variants involving programs, relations and queries and vice-versa).
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we focus on the logical design of the ETL scenario of a data warehouse. Based on a formal logical model that includes the data stores, activities and their constituent parts, we model an ETL scenario as a graph, which we call the Architecture Graph. We model all the aforementioned entities as nodes and four different kinds of relationships (instance-of, part-of, regulator and provider relationships) as edges. In addition, we provide simple graph transformations that reduce the complexity of the graph. Finally, in order to support the engineering of the design and the evolution of the warehouse, we introduce specific importance metrics, namely dependence and responsibility, to measure the degree to which entities are bound to each other.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
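To make the graph modeling concrete, here is a toy sketch of an Architecture Graph with the four relationship kinds as labeled edges. The degree-based readings of dependence and responsibility below are a deliberate simplification for illustration; the abstract does not spell out the paper's exact definitions.

```python
# Toy rendering of the Architecture Graph idea: entities are nodes, and
# edges carry one of the four relationship kinds named in the abstract.
from collections import defaultdict

class ArchitectureGraph:
    EDGE_KINDS = {"instance-of", "part-of", "regulator", "provider"}

    def __init__(self):
        self.out_edges = defaultdict(list)   # node -> [(kind, target)]
        self.in_edges = defaultdict(list)    # node -> [(kind, source)]

    def add_edge(self, source, kind, target):
        assert kind in self.EDGE_KINDS
        self.out_edges[source].append((kind, target))
        self.in_edges[target].append((kind, source))

    def dependence(self, node):
        """Simplified reading: how many providers the node draws from."""
        return sum(1 for k, _ in self.out_edges[node] if k == "provider")

    def responsibility(self, node):
        """Simplified reading: how many consumers the node feeds."""
        return sum(1 for k, _ in self.in_edges[node] if k == "provider")

g = ArchitectureGraph()
g.add_edge("Activity.SK1", "provider", "Source.R1")   # activity reads R1
g.add_edge("DW.Fact", "provider", "Activity.SK1")     # fact table reads SK1
print(g.dependence("Activity.SK1"), g.responsibility("Activity.SK1"))
```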
Abstract:
This article concerns the logical design of ETL (Extraction-Transformation-Loading) scenarios for data warehouses. Based on a formal logical model consisting of data stores, activities and their constituent parts, an ETL scenario is modeled as a graph, called the Architecture Graph. All the aforementioned entities constitute the nodes of the graph, and the four different kinds of relationships among them (instance-of, part-of, regulator and provider relationships) constitute its edges. In order to support the design and evolution of the data warehouse, specific importance metrics are defined, namely dependence and responsibility, for measuring the degree to which the entities of the scenario are bound to each other.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Nowadays, knowledge extraction methods are able to produce artifacts (also called patterns) that concisely represent data. Patterns are usually quite heterogeneous and require ad-hoc processing techniques. So far, little emphasis has been placed on developing an overall integrated environment for uniformly representing and querying different types of patterns. Within the larger context of modelling, storing, and querying patterns, in this paper, we: (a) formally define the logical foundations for the global setting of pattern management through a model that covers data, patterns and their intermediate mappings; (b) present a pattern specification language for pattern management along with safety restrictions; and (c) introduce queries and query operators and identify interesting query classes.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
It is commonly agreed that multidimensional data cubes form the basic logical data model for OLAP applications. Still, there seems to be no agreement on a common model for cubes. In this paper we propose a logical model for cubes based on the key observation that a cube is not a self-existing entity, but rather a view over an underlying data set. We accompany our model with syntactic characterisations for the problem of cube usability. To this end, we have developed algorithms to check whether (a) the marginal conditions of two cubes are appropriate for a rewriting, in the presence of aggregation hierarchies and (b) an implication exists between two selection conditions that involve different levels of aggregation of the same dimension hierarchy. Finally, we present a rewriting algorithm for the cube usability problem.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
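The 'cube as a view' observation can be illustrated in a few lines. The sketch below, using pandas as a stand-in for the underlying data set, derives a cube by a selection, a grouping to target levels, and an aggregation; the table and column names are invented for the example.

```python
# The 'cube as a view' observation: the cube is not stored as a
# first-class object but defined by selection, grouping and aggregation
# over the underlying data set.
import pandas as pd

sales = pd.DataFrame({
    "day":     ["2024-01-01", "2024-01-01", "2024-01-02"],
    "month":   ["2024-01", "2024-01", "2024-01"],
    "product": ["p1", "p2", "p1"],
    "amount":  [10.0, 5.0, 7.0],
})

# A cube over (month, product) is simply this view over the detailed
# data; cube usability asks when another cube can be rewritten against
# it, given compatible grouping levels and selection conditions.
cube = (sales[sales["amount"] > 0]
        .groupby(["month", "product"], as_index=False)["amount"].sum())
print(cube)
```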
Abstract:
On-Line Analytical Processing (OLAP) is a trend in database technology based on the multidimensional view of information at the client level. Despite the common acceptance of multidimensional cubes as the central logical model for OLAP and the abundance of research proposals, there is little agreement on a common terminology and semantics for the logical data model. This article proposes an additional logical model for cubes, based on the observation that a cube is not a self-existing entity, but rather a view over an underlying data set. The proposed model is powerful enough to cover all the common OLAP operations, such as selection, roll-up and drill-down across levels of granularity, through a sound and complete algebra. It is also shown how this model can be used as the basis for processing cube operations, and syntactic characterisations are presented for the cube usability problem (i.e., the problem of using the data of one cube to compute another cube).
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. Research has only recently dealt with the above problem and provided few models, tools and techniques to address the issues around the ETL environment [1,2,3,5]. In this paper, we present a logical model for ETL processes. The proposed model is characterized by several templates, representing frequently used ETL activities along with their semantics and their interconnection. In the full version of the paper [4] we present more details on the aforementioned issues and complement them with results on the characterization of the content of the involved data stores after the execution of an ETL scenario and impact-analysis results in the presence of changes.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
In this work, we apply the semantic information push paradigm in the domain of cultural heritage and advocate for its usefulness in a number of diverse scenarios, ranging from personalised content delivery to museum visitors to the curation of huge knowledge bases that integrate diverse cultural assets. We envision a large-scale semantic information push system that is able to perform efficient filtering of multiple RDF data streams based on expressive subscriptions that target both the structure and the content of the stream. To this end, we put forward STIP, a new algorithm that indexes user subscriptions and utilises its index structures to efficiently match them against the stream of RDF events; STIP proves four orders of magnitude faster than its baseline competitor in an experimental evaluation with real-world data. To the best of our knowledge, this is the first approach in the literature to propose the usage of information push, along with an appropriate algorithm, as the technological substrate for a variety of high-level cultural heritage applications such as personalisation and recommender systems.
Abstract:
Efficient management and analysis of large volumes of data is a demanding task of increasing scientific and industrial importance, as the ubiquitous generation of information governs more and more aspects of human life. In this article, we introduce FML-kNN, a novel distributed processing framework for Big Data that performs probabilistic classification and regression, implemented in Apache Flink. The framework's core consists of a k-nearest neighbor join algorithm which, contrary to similar approaches, is executed in a single distributed session and is able to operate on very large volumes of data of variable granularity and dimensionality. We assess FML-kNN's performance and scalability in a detailed experimental evaluation, in which it is compared to similar methods implemented in the Apache Hadoop, Spark, and Flink distributed processing engines. The results indicate an overall superiority of our framework in all the performed comparisons. Further, we apply FML-kNN in two motivating use cases for water demand management, against real-world domestic water consumption data. In particular, we focus on forecasting water consumption using 1-h smart meter data, and extracting consumer characteristics from water use data in the shower. We further discuss the obtained results, demonstrating the framework's potential in useful knowledge extraction.
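At the core of the framework is a k-nearest neighbor join. The serial Python sketch below illustrates that primitive and the probabilistic classification built on it; the distributed, single-session execution on Flink, which is the paper's actual contribution, is not reproduced here.

```python
# Serial illustration of the kNN-join primitive and probabilistic
# classification on top of it (not the distributed FML-kNN algorithm).
import math
from collections import Counter

def knn_join(queries, reference, k):
    """For every query point, yield its k nearest labelled neighbours."""
    for q in queries:
        ranked = sorted(reference, key=lambda r: math.dist(q, r[0]))[:k]
        yield q, ranked

def probabilistic_classify(queries, reference, k):
    """Class probabilities from neighbour label frequencies."""
    for q, ranked in knn_join(queries, reference, k):
        votes = Counter(label for _, label in ranked)
        yield q, {c: n / k for c, n in votes.items()}

reference = [((0.0, 0.0), "low"), ((0.1, 0.2), "low"), ((5.0, 5.0), "high")]
for q, probs in probabilistic_classify([(0.05, 0.1)], reference, k=3):
    print(q, probs)
```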
Abstract:
The amount and significance of time series that are associated with specific locations, such as visitor check-ins at various places
or sensor readings, have increased in many domains over the last years. Although several works exist for time series visualization
and visual analytics in general, there is a lack of efficient techniques for geolocated time series in particular. In this work, we
present an approach that relies on a hybrid spatial-time series index to allow for interactive map-based visual exploration and
summarization of geolocated time series data. In particular, we use the BTSR-tree index, which extends the R-tree by maintaining
bounds for the time series indexed at each node. We describe the structure of this index and show how it can be directly exploited
to produce map-based visualizations of geolocated time series at different zoom levels efficiently. We empirically validate
our approach using two real-world datasets, as well as a synthetic one that is used to test the scalability of our method.
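The key idea of the BTSR-tree can be sketched compactly: each node keeps, next to its spatial MBR, pointwise lower and upper envelopes of the time series indexed beneath it, so coarse zoom levels can be rendered from node bounds without visiting the leaves. The layout below is illustrative, not the index's exact structure.

```python
# Sketch of BTSR-tree node bounds: an R-tree node augmented with
# pointwise min/max envelopes of the time series stored below it.

class BTSRNode:
    def __init__(self, mbr, children=None, series=None):
        self.mbr = mbr                    # (xmin, ymin, xmax, ymax)
        self.children = children or []    # internal node
        self.series = series or []        # leaf: geolocated time series
        self.lower, self.upper = self._bounds()

    def _bounds(self):
        seqs = (self.series if self.series
                else [c.lower for c in self.children] +
                     [c.upper for c in self.children])
        if not seqs:
            return [], []
        lower = [min(vals) for vals in zip(*seqs)]
        upper = [max(vals) for vals in zip(*seqs)]
        return lower, upper

leaf = BTSRNode((0, 0, 1, 1), series=[[1, 4, 2], [2, 3, 5]])
root = BTSRNode((0, 0, 10, 10), children=[leaf])
print(root.lower, root.upper)   # [1, 3, 2] [2, 4, 5]
```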
Abstract:
Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCENTRAL, a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCENTRAL is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCENTRAL scales to large graphs. We demonstrate with real datasets that the serial implementation of iCENTRAL is up to 3.7 times faster than existing serial methods. Our parallel implementation, which scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.
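The localization insight can be demonstrated with networkx: after an edge update, only the biconnected component containing the edge needs reprocessing. The sketch below recomputes centrality within that component from scratch, which is a simplification; iCENTRAL's incremental update inside the component is considerably more refined.

```python
# After inserting an edge, find the biconnected component containing it;
# only that component needs reprocessing (here: naive recomputation).
import networkx as nx

def affected_component(G, u, v):
    G.add_edge(u, v)
    for comp in nx.biconnected_components(G):
        if u in comp and v in comp:
            return comp

G = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 3)])
comp = affected_component(G, 1, 4)          # update bridges two cycles
print(sorted(comp))
print(nx.betweenness_centrality(G.subgraph(comp)))
```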
Abstract:
We propose a model for the estimation of query execution time in an environment supporting bushy and pipelined parallelism. We consider a parallel architecture of processors having private main memories, accessing a shared secondary storage and communicating with each other via a network. For this environment, we compute the cost of query operators when processed in isolation and when in pipeline mode. We use those formulae to incrementally compute the cost of a query execution plan from its components. Our cost model can be incorporated into any optimizer for parallel query processing that considers parallel and pipelined execution of the query operators.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
cost-model-95.pdf | 259.36 KB |
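As a toy illustration of the incremental idea, and only that, the stand-in formulas below contrast operators executed in isolation with a pipeline paced by its slowest stage; they are invented for the example and are not the paper's actual cost formulae.

```python
# Toy cost combinators (stand-in formulas, not the paper's model):
# isolated operators cost the sum of their local work, while a pipeline
# of overlapping stages is dominated by its slowest stage plus start-up.

def isolated_cost(op_costs):
    """Operators executed one after the other."""
    return sum(op_costs)

def pipelined_cost(op_costs, startup_per_stage=0.1):
    """Stages overlap: the slowest stage paces the pipeline."""
    return max(op_costs) + startup_per_stage * len(op_costs)

scan, join, agg = 4.0, 7.0, 2.0
print(isolated_cost([scan, join, agg]))    # 13.0
print(pipelined_cost([scan, join, agg]))   # 7.3
```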
Abstract
In this study we present a technique for the parallel optimisation of join queries that uses the coarse-grain parallelism offered by the underlying architecture in order to reduce the CPU-bound optimisation overhead. The optimisation technique performs an almost exhaustive search of the solution space for small join queries and gradually, as the number of joins increases, it diverges towards iterative improvement. This technique has been developed on a low-parallelism transputer-based architecture, where its behaviour is studied for the optimisation of queries with many tens of joins.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
cai-93.pdf | 248.77 KB |
Abstract
In this study we present a technique for the parallel optimisation of join queries that uses the coarse-grain parallelism offered by the underlying architecture in order to reduce the CPU-bound optimisation overhead. The optimisation technique performs an almost exhaustive search of the solution space for small join queries and gradually, as the number of joins increases, it diverges towards iterative improvement. This technique has been developed on a low-parallelism transputer-based architecture, where its behaviour is studied for the optimisation of queries with many tens of joins.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
parle-paper.pdf | 495.03 KB |
This book covers a broad set of aspects that relate to real-time systems. As far as hardware is concerned, issues related to concurrency and processor multithreading are discussed, while hardware issues relevant to the digital capturing of multi-channel or single-channel data are also analysed. As far as software is concerned, the development of real-time systems is studied, with the focus being on the critical stages of system analysis and design; real-time operating systems, which will support the applications, are also studied. Hardware and software are also examined as an integral system, through a comprehensive study of biomedical real-time systems. Finally, performance analysis of real-time systems is also extensively covered. (excerpt from the foreword by prof. Athanasios Skodras)
Table of contents (chapters)
Abstract:
In the past years, a number of implementations of temporal DBMSs has been reported. Most of these implementations share a common feature, which is that they have been built as an extension to a snapshot DBMS. In this paper, we present three alternative design approaches that can be used for extending a snapshot DBMS to support temporal data, and evaluate the suitability of each approach, with respect to a number of design objectives.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
comparative-study-temporal-architectures.pdf | 254.11 KB |
Abstract:
Clinical trials are processes that produce large volumes of complex data, with inherent temporal requirements, since the state of patients evolves during the trials, and the data acquisition phase itself needs to be monitored. Additionally, since the requirements for all clinical trials have a significant common portion, it is desirable to capture these common requirements in a generalised framework, which will be instantiated for each specific trial by supplementing the trial-specific requirements. In this paper, we present an integral approach to clinical trial management, using a temporal object-oriented methodology to capture and model the requirements, a temporal OODBMS for data storage and a generalised template application, through which trial-specific applications may be generated.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
flexible-framework-for-temporal-clinical-data.pdf | 234.55 KB |
Abstract:
Queries in temporal databases often employ the coalesce operator, either to coalesce results of projections, or data which are not coalesced upon storage. Therefore, the performance and the optimisation schemes utilised for this operator are of major importance for the performance of temporal DBMSs. So far, performance studies for various algorithms that implement this operator have been conducted; however, the joint optimisation of the coalesce operator with other algebraic operators that appear in the query execution plan has only received minimal attention. In this paper, we propose a scheme for combining the coalesce operator with selection operators which are applied to the valid time of the tuples produced from a coalescing operation. The proposed scheme aims at reducing the number of tuples that a coalescing operator must process, while at the same time it allows the optimiser to exploit temporal indices on the valid time of the data.
Article available through the ACM Author-izer service.
Attachment | Size |
---|---|
optimisation-coalesce-valid-time-selection.pdf | 56.8 KB |
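For readers unfamiliar with the operator, the sketch below coalesces value-equivalent tuples whose valid-time periods overlap or meet into maximal periods; the paper's optimisation aims precisely at reducing how many tuples reach such an operator when a valid-time selection follows it.

```python
# Minimal valid-time coalescing: value-equivalent tuples whose periods
# overlap or meet are merged into maximal periods.

def coalesce(tuples):
    """tuples: (value, start, end) with half-open periods [start, end)."""
    out = []
    for value, start, end in sorted(tuples):
        if out and out[-1][0] == value and start <= out[-1][2]:
            prev = out[-1]
            out[-1] = (value, prev[1], max(prev[2], end))
        else:
            out.append((value, start, end))
    return out

rows = [("mgr", 1, 5), ("mgr", 5, 9), ("mgr", 12, 14)]
print(coalesce(rows))   # [('mgr', 1, 9), ('mgr', 12, 14)]
```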
Abstract:
We study the recent proposal of Goyal and Egenhofer who presented a model for
qualitative spatial reasoning about cardinal directions. Our approach is formal
and complements the presentation of Goyal and Egenhofer. We focus our efforts
on the composition operator for two cardinal direction relations. We consider
two interpretations of the composition operator: consistency-based and
existential composition. We point out that the only published method to compute
the consistency-based composition does not always work correctly. Then, we
consider progressively more expressive classes of cardinal direction relations
and give consistency-based composition algorithms for these classes. Our
theoretical framework allows us to prove formally that our algorithms are
correct. When we consider existential composition, we demonstrate that the
binary relation resulting from the composition of two cardinal direction
relations cannot be expressed using the relations defined by Goyal and
Egenhofer. Finally, we discuss some extensions to the basic model and consider
the composition problem for these extensions.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
We study the recent proposal of Goyal and Egenhofer who presented a model for qualitative spatial reasoning about cardinal directions. Our approach is formal and complements the presentation of Goyal and Egenhofer. We focus our efforts on the operation of composition for two cardinal direction relations. We point out that the only published method to compute the composition does not always work correctly. Then we consider progressively more expressive classes of cardinal direction relations and give composition algorithms for these classes. Our theoretical framework allows us to prove formally that our algorithms are correct. Finally, we demonstrate that in some cases, the binary relation resulting from the composition of two cardinal direction relations cannot be expressed using the relations defined by Goyal and Egenhofer.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
We present a formal model for qualitative spatial reasoning with cardinal
directions that is based on a recent proposal in the literature. We use our
formal framework to study the composition operation for the cardinal direction
relations of this model. We consider progressively more expressive classes of
cardinal direction relations and give composition algorithms for these classes.
Finally, when we consider the problem in its generality, we show that the
binary relation resulting from the composition of some cardinal direction
relations cannot even be expressed using the relations which are currently
employed by the related proposal.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Qualitative spatial reasoning forms an important part of the commonsense reasoning required for building intelligent Geographical Information Systems (GIS). Previous research has come up with models to capture cardinal direction relations for typical GIS data. In this paper, we target the problem of efficiently computing the cardinal direction relations between regions that are composed of sets of polygons and present the first two algorithms for this task. The first of the proposed algorithms is purely qualitative and computes, in linear time, the cardinal direction relations between the input regions. The second has a quantitative aspect and computes, also in linear time, the cardinal direction relations with percentages between the input regions. The algorithms have been implemented and embedded in an actual system, CarDirect, that allows the user to annotate regions of interest in an image or a map, compute cardinal direction relations and retrieve combinations of interesting regions on the basis of a query.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Qualitative spatial reasoning forms an important part of the commonsense reasoning required for building intelligent
Geographical Information Systems (GIS). Previous research has come up with models to capture cardinal direction relations for typical GIS data. In this paper, we target the problem of efficiently computing the cardinal direction relations between regions that are composed of sets of polygons and present two algorithms for this task. The first of the proposed algorithms is purely qualitative and computes, in linear time, the cardinal direction relations between the input regions. The second has a quantitative aspect and
computes, also in linear time, the cardinal direction relations with percentages between the input regions. Our experimental evaluation indicates that the proposed algorithms outperform existing methodologies. The algorithms have been implemented and embedded in an actual system, CARDIRECT, that allows the user to 1) specify and annotate regions of interest in an image or a map, 2) compute cardinal direction relations between them, and 3) pose queries in order to retrieve combinations of interesting regions.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
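In the underlying Goyal and Egenhofer style of model, the reference region's minimum bounding rectangle partitions the plane into nine tiles, and the relation records which tiles the target intersects. The sketch below illustrates this for single rectangles; the papers handle regions composed of sets of polygons, and boundary conventions vary.

```python
# Cardinal direction relation between two axis-aligned rectangles: the
# reference MBR partitions the plane into nine tiles (B, N, NE, E, SE,
# S, SW, W, NW); the relation is the set of tiles the target intersects.

def tiles(reference, target):
    rx1, ry1, rx2, ry2 = reference   # reference MBR
    tx1, ty1, tx2, ty2 = target
    cols = [("W", None, rx1), ("", rx1, rx2), ("E", rx2, None)]
    rows = [("S", None, ry1), ("", ry1, ry2), ("N", ry2, None)]
    hit = set()
    for rname, ylo, yhi in rows:
        for cname, xlo, xhi in cols:
            x_ok = (xlo is None or tx2 > xlo) and (xhi is None or tx1 < xhi)
            y_ok = (ylo is None or ty2 > ylo) and (yhi is None or ty1 < yhi)
            if x_ok and y_ok:
                hit.add((rname + cname) or "B")
    return hit

print(tiles((0, 0, 4, 4), (2, 2, 6, 6)))   # {'B', 'N', 'NE', 'E'}
```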
Abstract:
We present a formal model for qualitative spatial reasoning with cardinal
directions and study the problem of checking the consistency of a set of
cardinal direction constraints. We present the first algorithm for this
problem, prove its correctness and analyze its computational complexity.
Utilizing the above algorithm we prove that the consistency checking of a
set of basic cardinal direction constraints can be performed in O(n^5) time
while the consistency checking of an unrestricted set of cardinal direction
constraints is NP-complete. Finally, we briefly discuss some extensions to
the basic model.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
In the past years the management of temporal data has attracted numerous researchers, resulting in a large number of temporal extensions to the relational and object-oriented data models. In this paper, the proposed temporal data model focuses on the functional characteristics of the histories. The paper introduces a set-oriented description of calendars together with a function-oriented history concept with a history algebra. The completeness of the proposed model with respect to the reduced temporal algebra TA is also proven. The expressive power of the proposed model is demonstrated at the end of the paper by a hospital example.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
function-oriented-history-in-databases.pdf | 217.09 KB |
Abstract:
Transactions and concurrency control are significant features in database systems, facilitating functions both at user and system level. However, the support of these features in a temporal DBMS has not yet received adequate research attention. In this paper, we describe the techniques developed in order to support transaction and concurrency control in a temporal DBMS which was implemented as an additional layer to a commercial DBMS. The proposed techniques make direct use of the transaction mechanisms of the DBMS. In addition, they overcome a number of limitations such as automatic commit points, lock release and log size increment, which are imposed by the underlying DBMS. Our measurements have shown that the overhead introduced by these techniques is negligible, less than 1% in all cases. The approach undertaken is of general interest, it can also be applied to non-temporal DBMS extensions.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
transaction-and-concurrency-support-in-tdbms.pdf | 264.68 KB |
Abstract:
Application development on top of database systems is heavily based on the existence of embedded and 4GL languages. However, the issue of designing and implementing embedded or 4GL temporal languages has not been addressed so far. In this paper, we present a design approach for implementing an embedded temporal language that supports valid time. Furthermore, we introduce implementation techniques that can be used for implementing any embedded temporal language that supports valid time on top of a DBMS.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
implementing-embedded-valid-time-languages.pdf | 257.5 KB |
Abstract:
We present a formal model for qualitative spatial reasoning with cardinal
directions utilizing a co-ordinate system. Then, we study the problem of
checking the consistency of a set of cardinal direction constraints. We
introduce the first algorithm for this problem, prove its correctness and
analyze its computational complexity. Using the above algorithm, we prove that
the consistency checking of a set of basic (i.e., non-disjunctive) cardinal
direction constraints can be performed in O(n^5) time. We also show that
the consistency checking of a set of unrestricted (i.e., disjunctive and
non-disjunctive) cardinal direction constraints is NP-complete. Finally, we
briefly discuss an extension to the basic model and outline an algorithm for
the consistency checking problem of this extension.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Temporal and spatial constraint networks do not live alone
in the wilderness.
In many cases they are components of larger systems
e.g., temporal database systems, spatial database systems,
knowledge representation systems, natural language systems, planning
systems, scheduling systems, multimedia systems and so on.
We believe that an interesting new frontier for temporal and spatial
reasoning research is the formalisation, analysis and
possible re-implementation of systems where
temporal or spatial reasoners are an important component.
In this paper we will make a first contribution to this exciting
area of research. We will consider temporal constraint networks
complemented by a database for storing the information typically
used to label network nodes. We will then study the computational
complexity of querying the combined system using a first order
modal query language.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
We start with the assumption that temporal knowledge
usually captured by constraint networks
can be represented and queried more effectively by using
the scheme of indefinite constraint databases proposed by Koubarakis.
Although query evaluation
in this scheme is in general a hard computational problem,
we demonstrate that there are several interesting cases where
query evaluation can be done in PTIME. These tractability
results are original and subsume previous results
by van Beek, Brusoni, Console and Terenziani.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
We start with the assumption that temporal and spatial knowledge
usually captured by constraint networks
can be represented and queried more effectively by using
the scheme of indefinite constraint databases.
Because query evaluation in this scheme is in general a hard
computational problem, we seek tractable instances of query evaluation.
We assume that we have a class of constraints
C with some reasonable computational and closure properties
(the computational properties of interest are that the satisfiability
problem and an appropriate version of the variable elimination problem for
C should be solvable in PTIME).
Under this assumption, we exhibit general classes
of indefinite constraint databases and first-order
modal queries for which query evaluation
can be done with PTIME data complexity.
We then search for tractable instances of C among the subclasses
of Horn disjunctive linear constraints over the rationals. From
previous research we know that the
satisfiability problem for Horn disjunctive linear constraints is
solvable in PTIME, but not the variable elimination problem.
Thus we try to discover subclasses of Horn disjunctive linear
constraints with tractable variable elimination problems.
The class of UTVPI^{\ne} constraints
is the largest class that we show to have this property.
Finally, we restate our general tractability results with C ranging
over the newly discovered tractable classes. Interesting
tractable query answering problems for indefinite temporal and spatial
constraint databases are identified in this way. We close our complexity
analysis by precisely outlining the frontier between tractable and possibly intractable
query answering problems.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
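To convey the flavour of such tractable classes on the simplest fragment: difference constraints x - y <= c, a special case of UTVPI, are satisfiable over the rationals iff their constraint graph has no negative cycle, which Bellman-Ford detects in polynomial time. This is classical background, sketched below; it is not the algorithmics of the paper.

```python
# Satisfiability of difference constraints x - y <= c via Bellman-Ford:
# each constraint is an edge y -> x with weight c; the system is
# satisfiable iff the graph has no negative cycle.

def satisfiable(variables, constraints):
    """constraints: list of (x, y, c) meaning x - y <= c."""
    dist = {v: 0 for v in variables}          # implicit source, dist 0
    for _ in range(len(variables)):
        changed = False
        for x, y, c in constraints:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
                changed = True
        if not changed:
            return True                        # settled: no negative cycle
    return False                               # still relaxing: unsatisfiable

print(satisfiable("ab", [("a", "b", 1), ("b", "a", -2)]))   # False
```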
Abstract:
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Temporal data, i.e. data varying over the time dimension whose history of evolution is maintained, are rarely used in the industrial world. Yet, far from managing only non-temporal data, numerous and varied applications and industrial sectors, such as banking, insurance, disease management in medicine and booking, face the management of temporal data. These applications often rely on results of in-house developments, simulating temporal data in more or less effective ways.
This paper presents the results of the European project TOOBIS (Temporal Object Oriented dataBase within Information System), illustrated by an application using and managing temporal data in the domain of Clinical Research. TOOBIS offers an extension of the object-oriented database standard in order to provide a full temporal object-oriented database management system, as well as a temporal methodology for analysis and design.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
toobis-application-donees-temporelles.pdf | 317.11 KB |
Abstract:
In the past years a number of temporal extensions to the different database models have been proposed. Extensions to the relational model have been following the different SQL standards, while no attempts have been made to extend the OO-databases' standard, defined by ODMG. In this paper we present a temporal extension to the ODMG standard, as this has been specified in the TOOBIS project. A Temporal Object Data Model, a Temporal Object Definition Language and a Temporal Object Query Language have been specified and have been proposed as extensions to the ODM, ODL and OQL of ODMG. This extension has been implemented over a commercial OODBMS, reinforcing and validating the effort of standardisation and portability of this extension.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
temporal-extension-odmg.pdf | 247.68 KB |
Abstract:
We consider the scheme of indefinite constraint databases proposed by
Koubarakis. This scheme can be used to represent indefinite information arising
in temporal, spatial and truly spatiotemporal applications. The main technical
problem that we address in this paper is the discovery of tractable classes of
databases and queries in this scheme. We start with the assumption that we have
a class of constraints C with satisfiability and variable elimination
problems that can be solved in PTIME. Under this assumption, we show that there
are several general classes of databases and queries for which query evaluation
can be done with PTIME data complexity. We then search for tractable instances
of C in the area of temporal and spatial constraints. Classes of
constraints with tractable satisfiability problems can be easily found in the
literature. The largest class that we consider is the class of Horn disjunctive
linear constraints over the rationals. Because variable elimination for Horn
disjunctive linear constraints cannot be done in PTIME, we try to discover
subclasses with tractable variable elimination problems. The class of UTVPI^{\ne} constraints is the largest class that we show to have this
property. Finally, we restate the initial general results with C ranging
over the newly discovered tractable classes. Tractable query answering problems
for indefinite temporal and spatial constraint databases are identified in this
way.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Abstract:
Transactions are a significant concept in database systems, facilitating functions both at user and system level. However transaction support in temporal DBMSs has not yet received enough research attention. In this paper, we present techniques for incorporating transaction support in a temporal DBMS, which is implemented as an additional layer to a commercial RDBMS. These techniques overcome certain limitations imposed by the underlying RDBMS, and avoid excessive increment of the log size.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
transaction-support-in-tdbms.pdf | 203.19 KB |
Abstract:
Over the last few years a number of distributed social networks with content management capabilities have been introduced both by academia and industry. However, none of these efforts has so far focused on supporting both information retrieval and filtering functionality in a distributed social networking environment. In this work we present a social networking architecture that offers both functionalities, in addition to the usual social interaction tasks, in distributed social networks, outline the associated distributed protocols, and introduce a novel data source selection mechanism for identifying good data sources. This mechanism is designed to take into account a combination of resource selection, predicted publication behaviour, and content cost to improve the selection of information producers by users. To the best of our knowledge our approach, coined AGORA, is the first work to model the price of content and to study its effect on retrieval efficiency and effectiveness in a distributed social network setting. Finally, our work goes beyond modelling by providing proof-of-concept experiments with real-world corpora and social networking data.
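As a stand-in for the selection mechanism described above, the toy scoring rule below combines a resource-selection score, a predicted publication probability and a content cost; the linear form and the weights are illustrative assumptions, not the AGORA formulas.

```python
# Toy data source ranking: combine resource selection, predicted
# publication behaviour and content cost (weights are illustrative).

def rank_sources(sources, alpha=0.5, beta=0.3, gamma=0.2):
    """sources: dicts with relevance, publish_prob, cost in [0, 1]."""
    score = lambda s: (alpha * s["relevance"]
                       + beta * s["publish_prob"]
                       - gamma * s["cost"])
    return sorted(sources, key=score, reverse=True)

sources = [
    {"name": "s1", "relevance": 0.9, "publish_prob": 0.2, "cost": 0.8},
    {"name": "s2", "relevance": 0.6, "publish_prob": 0.7, "cost": 0.1},
]
print([s["name"] for s in rank_sources(sources)])   # ['s2', 's1']
```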
Abstract
Our main objective is the definition of a design model for a hypermedia database, dedicated to accommodating multimedia information and to promoting navigation as a means of information processing. We preferred the object-oriented paradigm to the relational one, because it provides generic modelling constructs and supports property inheritance. We view the hyperbase as a network of items and links, where items contain multimedia information and links represent the relationships among them. So, we define a class hierarchy containing both the information pieces and the interconnections among them as objects at the same level of functionality. In this environment, we support typed and weighted links to enhance the declaration of semantic relationships, and keywords to allow for querying as an alternative tool for information processing.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Attachment | Size |
---|---|
delta-conf-paper.pdf | 161.15 KB |
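The class hierarchy described above admits a compact object-oriented sketch, with items and typed, weighted links as peer objects and keywords enabling querying alongside navigation; all names are illustrative.

```python
# Sketch of the hyperbase design: items and links are peer objects in
# one hierarchy; links are typed and weighted; keywords support querying.

class HyperObject:
    def __init__(self, keywords=()):
        self.keywords = set(keywords)

class Item(HyperObject):
    def __init__(self, content, keywords=()):
        super().__init__(keywords)
        self.content = content        # any multimedia payload

class Link(HyperObject):
    def __init__(self, source, target, link_type, weight, keywords=()):
        super().__init__(keywords)
        self.source, self.target = source, target
        self.link_type, self.weight = link_type, weight

a = Item("intro.mpg", keywords={"overview"})
b = Item("details.txt", keywords={"navigation"})
l = Link(a, b, link_type="elaborates", weight=0.8)
matches = [o for o in (a, b, l) if "overview" in o.keywords]
```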