Telecare and Telemedicine services are a technology-based replacement for in-home care services provided primarily to the elderly and to consumers recovering from certain ailments. While these services are mostly successful in the pilot stage, they tend to fail in real-life settings. One major reason for this failure may be attributed to the security issues associated with these services. This research attempts to identify the Telecare/Telemedicine-related areas whose security issues need to be addressed. It reviews the work conducted in the field and the issues that remain open.
Multiple granularities are essential for extracting significant knowledge from spatio-temporal datasets at different levels of detail. They make it possible to zoom in and out of spatio-temporal datasets, thus enhancing data modelling flexibility and improving the analysis of information. In this paper we discuss effective solutions to the implementation issues that arise when a data model and a query language are enriched with spatio-temporal multigranularity. We propose appropriate representations for the space and time dimensions, granularities, granules, and multigranular values. In particular, the design of granularities and their relationships is illustrated with respect to the application of multigranular conversions for data access. Finally, we describe how multigranular spatio-temporal conversions affect data usability and how this important property may be guaranteed. In our discussion, we refer to an existing multigranular spatio-temporal model, whose design was previously proposed as an extension of the ODMG data model.
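To make the discussion concrete, the following minimal sketch (in Python, with purely illustrative names; it is not the ODMG extension referred to above) shows one way granularities, granules, and multigranular values could be represented, together with a simple coercion that converts a value to a coarser granularity by aggregation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Granularity:
    name: str                  # e.g. "day", "month"
    granules_per_coarser: int  # how many of these granules form one coarser granule

@dataclass
class MultigranularValue:
    granularity: Granularity
    values: dict               # granule index -> attribute value

def coarsen(mv, coarser, aggregate=sum):
    """Convert a multigranular value to a coarser granularity by grouping
    granules and applying an aggregation function (a coercion-style conversion)."""
    grouped = {}
    for granule, v in mv.values.items():
        grouped.setdefault(granule // mv.granularity.granules_per_coarser, []).append(v)
    return MultigranularValue(coarser, {g: aggregate(vs) for g, vs in grouped.items()})

# Daily rainfall readings coarsened to monthly totals (30-day months assumed
# purely for illustration).
day = Granularity("day", 30)
month = Granularity("month", 12)
daily = MultigranularValue(day, {0: 2.5, 1: 0.0, 31: 4.1})
monthly = coarsen(daily, month)   # values: {0: 2.5, 1: 4.1}
```

Coarsening by summation stands in here for the family of conversions used for data access; a full model would support a richer set of conversions in both directions.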
In applications involving spatio-temporal modelling, the granularities of data may have to adapt to the evolving semantics and significance of the data. To address this problem, in this paper we define ST2_ODMGe, a multigranular spatio-temporal model supporting evolutions, which encompass the dynamic adaptation of attribute granularities and the deletion of attribute values. Evolutions are specified as Event-Condition-Action rules and are performed at run-time. The event, the condition, and the action may refer to a period of time and a geographical area. Periodic evolutions may be specified, referring to both the transaction and valid time dimensions. Evolutions may also be constrained by attribute values. Evolutions greatly enhance flexibility in multigranular spatio-temporal data handling, but they require revisiting the notion of object consistency with respect to class definitions and the access to multigranular object values.
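As a hedged illustration only (the names and syntax below are hypothetical, not the ST2_ODMGe specification), an Event-Condition-Action evolution rule can be thought of as a triple evaluated at run-time against an object's attribute values, restricted to a period of time and a geographical area.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvolutionRule:
    event: str                         # triggering event, e.g. "on_update(temperature)"
    period: tuple                      # time period the rule applies to (valid time)
    area: str                          # geographical area of interest (illustrative)
    condition: Callable[[dict], bool]  # predicate over the object's attribute values
    action: Callable[[dict], None]     # e.g. coarsen the granularity, delete old values

def fire(rule, event, obj):
    """Evaluate the rule at run-time when an event occurs on the object."""
    if event == rule.event and rule.condition(obj):
        rule.action(obj)

# Example: once temperature readings become stable, keep them only at a
# coarser (monthly) granularity.
rule = EvolutionRule(
    event="on_update(temperature)",
    period=("2009-01", "2009-12"),
    area="region_42",
    condition=lambda o: max(o["temperature"]) - min(o["temperature"]) < 1.0,
    action=lambda o: o.update(granularity="month"),
)
obj = {"temperature": [20.1, 20.3, 20.2], "granularity": "day"}
fire(rule, "on_update(temperature)", obj)   # obj["granularity"] is now "month"
```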
Temporal granularities are the units of measure for temporal data; thus a multigranular temporal object model allows temporal data to be stored at different levels of detail, according to the needs of the application domain. In this paper we investigate how the integration of multiple temporal granularities into an object-oriented data model impacts the inheritance hierarchy. We specifically address issues related to attribute refinement and their consequences for object substitutability. This entails the development of suitable instruments for converting temporal values from one granularity to another.
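A minimal sketch, under assumed class and attribute names, of the kind of situation addressed: a subclass refines an attribute to a finer granularity, and a granularity conversion is needed so that its instances remain substitutable where the coarser superclass view is expected.

```python
# Hypothetical classes; the point is the day -> month conversion needed so a
# FineSensor can be used wherever a Sensor is expected (substitutability).

class Sensor:
    granularity = "month"
    def __init__(self, monthly):
        self._monthly = monthly          # month index -> reading
    def readings(self):
        return self._monthly

class FineSensor(Sensor):
    granularity = "day"                  # refined (finer) attribute granularity
    def __init__(self, daily):
        self._daily = daily              # day index -> reading
    def readings(self):
        # Convert day -> month (30-day months and averaging assumed here) so
        # the superclass contract is still honoured.
        monthly = {}
        for d, v in self._daily.items():
            monthly.setdefault(d // 30, []).append(v)
        return {m: sum(vs) / len(vs) for m, vs in monthly.items()}

# Client code written against Sensor keeps working:
for s in (Sensor({0: 18.0}), FineSensor({0: 17.5, 1: 18.5})):
    print(s.granularity, s.readings())
```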
In this paper, we present an approach for detecting spam over IP telephony (SPIT) in Voice-over-IP (VoIP) systems. SPIT detection differs from email spam detection in that the process must run in soft real time, fewer features are available for examination because mining voice traffic at runtime is difficult, and the signaling traffic of legitimate and malicious callers is similar. Our approach differs from existing work in its adaptability to new environments without the need for laborious and error-prone manual parameter configuration. We cluster calls based on call parameters, leveraging optional user feedback for some calls, which users mark as SPIT or non-SPIT. We improve on a popular semi-supervised learning algorithm, MPCK-Means, to make it scale to a large number of calls. Our evaluation on captured call traces shows a fifteen-fold reduction in computation time, together with an improvement in detection accuracy.
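The sketch below is not the improved MPCK-Means variant developed in the paper; it is a much-simplified seeded k-means, shown only to illustrate how a small amount of user feedback (calls marked SPIT or non-SPIT) can seed and constrain clustering over per-call features (the feature layout is assumed).

```python
import numpy as np

def seeded_kmeans(X, seeds, n_iter=20):
    """X: (n_calls, n_features) matrix of per-call features (e.g. call rate,
    average duration); seeds: {label: list of row indices marked by users}."""
    centroids = np.array([X[idx].mean(axis=0) for idx in seeds.values()])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign every call to its nearest centroid
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # calls with user feedback stay in their designated cluster
        for k, idx in enumerate(seeds.values()):
            labels[idx] = k
        centroids = np.array([X[labels == k].mean(axis=0) for k in range(len(seeds))])
    return labels

# Usage: suppose users marked calls 0-2 as non-SPIT and calls 3-4 as SPIT.
# labels = seeded_kmeans(features, {"ham": [0, 1, 2], "spit": [3, 4]})
```

The full algorithm additionally learns a distance metric and handles pairwise constraints; the seeded variant above is only meant to convey why a handful of labeled calls suffices to anchor the clusters.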
Large-scale network simulation has grown in importance due to the rapid increase in Internet size and the availability of Internet measurement topologies, with applications to computer networks and network security. A key obstacle to large-scale network simulation over PC clusters is the memory balancing problem, where a memory-overloaded machine can slow down a distributed simulation due to disk I/O overhead. Network partitioning methods for parallel and distributed simulation are ill equipped to handle the new challenges introduced by memory balancing because of their focus on CPU and communication balancing.
This dissertation studies memory balancing for large-scale network simulation in power-law networks over PC clusters. First, we design and implement a measurement subsystem for dynamically tracking memory consumption in DaSSFNet, a distributed network simulator. Accurate monitoring of memory consumption is difficult due to the complex protocol interactions through which message-related events are created and destroyed inside and outside a simulation kernel. Second, we achieve efficient memory cost monitoring by tackling the problem of estimating the peak memory consumption of a group of simulated network nodes in power-law topologies during network partitioning. In contrast to CPU balancing, where the processing cost of a group of nodes is proportional to their sum, in memory balancing this closure property need not hold. Power-law connectivity introduces additional complications due to skews in resource consumption across network nodes. Third, we show that the maximum memory cost metric outperforms the total cost metric for memory balancing under multilevel recursive partitioning, whereas the opposite holds for CPU balancing. We show that this trade-off can be overcome through joint memory-CPU balancing, in general not feasible due to constraint conflicts, which is enabled by the tendency of network simulation to induce correlation between memory and CPU costs. Fourth, we evaluate memory balancing in the presence of virtual memory (VM) management, which allows larger problem instances to be run over limited physical memory. VM introduces complex memory management dependencies that make understanding and evaluating simulation performance difficult. We provide a performance evaluation framework that incorporates the impact of memory thrashing in distributed network simulation and admits quantitative performance comparison and diagnosis. Fifth, we show that improved memory balancing under the maximum cost metric in the presence of VM manifests as faster completion time compared to the total cost metric, despite the CPU balancing advantage of the latter. In cases where the CPU balancing advantage of the total cost metric is strong, we show that joint memory-CPU balancing can achieve the best of both worlds.
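The following toy example (illustrative numbers only, not the dissertation's actual cost metrics) shows why the closure property mentioned above fails for memory: when the individual peaks of two nodes occur at different times, the peak memory of the group is well below the sum of the per-node peaks, which is what makes an additive, CPU-style cost estimate misleading for memory balancing.

```python
# Per-node memory consumption sampled over three intervals (MB); the two
# nodes peak at different times, so the group's peak is far below the sum of
# the individual peaks (no closure property, unlike CPU cost).
node_a = [10, 2, 1]
node_b = [1, 2, 10]

sum_of_peaks = max(node_a) + max(node_b)                   # 20: additive estimate
group_peak = max(a + b for a, b in zip(node_a, node_b))    # 11: actual group peak

print(sum_of_peaks, group_peak)
```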
We carry out the performance evaluation using benchmark applications with varying traffic characteristics: BGP routing, worm propagation under local and global scanning, and a distributed client/server system. We use a testbed of 32 Intel x86 machines running a measurement-enhanced DaSSFNet over Linux.