In this paper, we develop a framework for efficiently encoding predictive error frames (PEF) as part of a rate scalable, wavelet-based video compression algorithm. We investigate the use of rate-distortion analysis to determine the significance of coefficients in the wavelet decomposition. Based on this analysis, we allocate the bit budget assigned to a PEF to the coefficients that yield the largest reduction in distortion, while maintaining the embedded and rate scalable properties of our video compression algorithm.
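The core of the allocation step described above can be illustrated with a small sketch. This is an assumed, simplified model (the function name, the per-coefficient cost/benefit tuples, and the greedy ratio heuristic are illustrative, not the paper's actual algorithm): each candidate coefficient carries a bit cost and an estimated distortion reduction, and the budget is spent greedily on the best reduction-per-bit candidates.

```python
# Hypothetical sketch of greedy rate-distortion bit allocation for a
# predictive error frame (PEF); names and cost model are illustrative.

def allocate_bits(coefficients, bit_budget):
    """Greedily spend a PEF's bit budget on the coefficients that
    yield the largest distortion reduction per bit."""
    # Each candidate: (coefficient id, bits needed to refine it,
    #                  distortion reduction gained by refining it).
    ranked = sorted(coefficients,
                    key=lambda c: c[2] / c[1],   # reduction per bit
                    reverse=True)
    chosen, spent = [], 0
    for cid, bits, reduction in ranked:
        if spent + bits <= bit_budget:
            chosen.append(cid)
            spent += bits
    return chosen, spent

# Example: four wavelet coefficients competing for a 10-bit budget.
cands = [("c0", 4, 9.0), ("c1", 3, 8.0), ("c2", 5, 5.0), ("c3", 2, 1.0)]
ids, used = allocate_bits(cands, 10)
```

Because candidates are ranked by marginal benefit, the budget is never spent on a low-payoff coefficient while a higher-payoff one is still affordable, which is the property the abstract's allocation strategy relies on.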
In this paper we explore a wavelet compression scheme for color images that uses binary vector morphology to aid in encoding the locations of the wavelet coefficients. This is accomplished by predicting the significance of coefficients in the subbands. The approach fully exploits the correlation between color components and the correlation between and within subbands of the wavelet coefficients. This compression scheme produces images comparable in quality to those of color zerotree encoders at the same data rate, but is computationally less complex.
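The significance-prediction idea can be sketched with a toy example. This is a simplified assumption about the mechanism, not the paper's exact scheme: since significant wavelet coefficients tend to cluster spatially, a binary dilation of the map of already-known significant positions serves as a prediction of which neighboring positions are likely significant.

```python
# Illustrative sketch (not the paper's exact scheme): predict likely
# significant coefficient locations in a subband by dilating the
# binary map of coefficients already found significant.

def dilate(mask, rows, cols):
    """Binary dilation of a set of (row, col) positions with a
    3x3 structuring element, clipped to the subband bounds."""
    out = set()
    for r, c in mask:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    out.add((rr, cc))
    return out

# Coefficients at (1,1) and (1,2) are significant; the dilated mask
# predicts that their immediate neighbours are likely significant too.
significant = {(1, 1), (1, 2)}
predicted = dilate(significant, 4, 4)
```

Positions outside the dilated mask can then be coded cheaply as predicted-insignificant, which is how a morphological predictor reduces the cost of encoding coefficient locations.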
Rate scalable video compression is appealing for low bit rate applications, such as video telephony and wireless communication, where the bandwidth available to an application cannot be guaranteed. In this paper, we investigate a set of strategies to increase the performance of SAMCoW, a rate scalable encoder [1, 2]. These techniques are based on wavelet decomposition, spatial orientation trees, and motion compensation.
The characteristics of “non-natural” images, such as predictive error frames used in video compression, present a challenge for traditional compression techniques. Particularly difficult are small images, such as QCIF, where compression artifacts at low data rates are more noticeable. In this paper, we investigate techniques to improve the performance of a wavelet-based, rate scalable video codec at low data rates. These techniques include preprocessing and postprocessing stages to enhance the quality and reduce the compression artifacts of decoded images.
In this talk we will describe embedded image and video compression techniques. We describe an embedded zerotree-like approach that exploits the interdependency between color components, known as Color Embedded Zerotree Wavelet (CEZW). We will also present a video compression technique, Scalable Adaptive Motion Compensated Wavelet (SAMCoW) compression, that uses the CEZW data structure described above. We show that, in addition to providing a wide range of rate scalability, SAMCoW achieves performance comparable to that of more traditional hybrid video coders.
Biological, chemical, and radiological agents can interfere with the activities of medical care providers, patient samples, and medicine administration. This can result in a shutdown of all medical care, leaving patients at major risk. The technical challenge is to develop sensors that detect and monitor any violations in the medical care environment before a threat to life occurs. Wireless devices must communicate multimedia data such as patient information, laboratory results, prescriptions, and X-ray and EKG reports. The reliability, security, and accuracy of these sensors and wireless devices affect the timeliness of access to information for patient monitoring. In addition, data can be corrupted, computer information systems can fail, and communication networks may experience denial-of-service attacks, leading to complete failure of proper patient care. In this paper, we discuss security and safety issues in the medical environment; the technology, types, and characteristics of sensors; and research issues in smart antennas, denial of service, fault-tolerant authentication, privacy, and energy considerations. A discussion of sensors in patient rooms, clinics/wards, and hospitals, and of measures of safety and security, is presented. Available devices for sensing and wireless communication are also briefly covered.
In this paper, we describe a community effort to identify the common body of knowledge (CBK) for computer security curricula. Academicians and practitioners have been engaged in targeted workshops for the past two years, producing the results given here. The long-term objective of the project is to develop a curriculum framework for undergraduate and graduate programs in Information Assurance (IA). The framework includes: identification of broad areas of knowledge considered important for…
The mobile computing paradigm has emerged due to advances in wireless and cellular networking technology. This rapidly expanding technology poses many challenging research problems in the area of mobile database systems. Mobile users can access information independent of their physical location through wireless connections. However, accessing and manipulating information without restricting users to specific locations complicates data processing activities. There are computing constraints that make mobile database processing different from wired distributed database computing. In this chapter, we survey the fundamental research challenges particular to mobile database computing, review some of the proposed solutions, and identify some of the upcoming research challenges. We discuss interesting research areas, including mobile location data management, transaction processing and broadcast, cache management, and replication. We highlight new research directions in mobile digital libraries, mobile data warehousing, mobile workflow, and the mobile web and e-commerce.
Data management for distributed computing has spawned a variety of research work and commercial products. At the same time, recent technical advances in portable computing devices and rapidly expanding wireless technologies have made mobile computing a reality. In conjunction with the existing computing infrastructure, data management for mobile computing gives rise to significant challenges and performance opportunities. Most mobile technologies physically support broadcast to all mobile users inside a cell. In mobile client-server models, a server can take advantage of this characteristic to broadcast information to all mobile clients in its cell. This fact introduces data management mechanisms that differ from the traditional algorithms proposed for distributed database systems. In this chapter, we give an executive summary and discuss topics such as data dissemination techniques, transaction models, and caching strategies that utilize the broadcast medium for data management. There is a wide range of options for the design of models and algorithms for mobile client-server database systems. We present taxonomies that categorize the algorithms proposed under each topic; these taxonomies provide insight into the tradeoffs inherent in each area of data management in mobile computing environments.
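The interaction between broadcast dissemination and client caching can be sketched minimally. This is an illustrative assumption (a flat cyclic schedule and a simple set-based cache; the function name is hypothetical), not a specific published protocol: the server repeats a broadcast cycle, and a client answers a request from its cache when possible, otherwise tuning in and waiting for the item to appear on the channel.

```python
# Minimal sketch of broadcast-based data dissemination: a server
# cyclically broadcasts a flat schedule of items; a client answers
# from its cache or waits for the item on the channel.

def ticks_to_answer(schedule, cache, start_slot, wanted):
    """Return how many broadcast slots a client listens to before it
    can answer a request for `wanted`, starting at `start_slot` in
    the cyclic schedule (0 if the item is cached)."""
    if wanted in cache:
        return 0
    n = len(schedule)
    for wait in range(n):
        if schedule[(start_slot + wait) % n] == wanted:
            return wait + 1  # slots listened to, including the hit
    raise KeyError(wanted)   # item is not on the broadcast at all

schedule = ["a", "b", "c", "d"]   # one broadcast cycle
cache = {"a"}                     # client has "a" cached locally
hit = ticks_to_answer(schedule, cache, 0, "a")   # answered from cache
miss = ticks_to_answer(schedule, cache, 1, "d")  # must wait on channel
```

The access-time asymmetry this exposes (cached items cost nothing, uncached items cost up to a full cycle) is precisely what the caching strategies surveyed in the chapter aim to exploit.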
We design and implement a linear hash algorithm in a nested transaction environment to handle large amounts of data with increased concurrency. Nested transactions allow parallel execution of transactions and handle transaction aborts, thus providing more concurrency and efficient recovery. We use an object-oriented methodology in the implementation, which helped in designing the programming components independently. In our model, buckets are modeled as objects and linear hash operations as methods. The paper's contribution is novel in that the system is, to our knowledge, the first to implement linear hashing in a nested transaction environment. We have built a system simulator to analyze the performance. A subtle benefit of the simulator is that it works as the real system with only minor changes.
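The linear hashing core (without the paper's nested-transaction machinery, which is not reproduced here) can be sketched in the object-oriented style the abstract describes: buckets as objects, hash operations as methods, and a split pointer that grows the table one bucket at a time. This is an assumed single-threaded sketch of the textbook algorithm, not the authors' implementation.

```python
# Illustrative single-threaded sketch of linear hashing: buckets are
# objects, and an insert that pushes the load factor too high splits
# the bucket at the split pointer.

class Bucket:
    def __init__(self):
        self.items = []

class LinearHash:
    def __init__(self, initial=2, max_load=2.0):
        self.buckets = [Bucket() for _ in range(initial)]
        self.n0 = initial        # bucket count at start of the round
        self.split = 0           # next bucket to split this round
        self.count = 0
        self.max_load = max_load

    def _addr(self, key):
        h = hash(key)
        i = h % self.n0
        if i < self.split:       # bucket already split this round:
            i = h % (2 * self.n0)  # use the next-round hash function
        return i

    def insert(self, key):
        self.buckets[self._addr(key)].items.append(key)
        self.count += 1
        if self.count / len(self.buckets) > self.max_load:
            self._split_next()

    def _split_next(self):
        old = self.buckets[self.split]
        self.buckets.append(Bucket())      # new bucket at n0 + split
        keep, moved = [], self.buckets[-1].items
        for k in old.items:                # rehash with h % (2 * n0)
            (keep if hash(k) % (2 * self.n0) == self.split
             else moved).append(k)
        old.items = keep
        self.split += 1
        if self.split == self.n0:          # round done: table doubled
            self.n0 *= 2
            self.split = 0

    def lookup(self, key):
        return key in self.buckets[self._addr(key)].items

table = LinearHash()
for k in range(50):
    table.insert(k)
```

Because only one bucket splits per overflow, growth is incremental; in the paper's setting each such split would run as a subtransaction, which is what lets other operations proceed concurrently and lets an aborted split be undone in isolation.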
The mobile computing paradigm has emerged due to advances in wireless and cellular networking technology. This rapidly expanding technology poses many challenging research problems in the area of mobile database systems. Mobile users can access information independent of their physical location through wireless connections. However, accessing and manipulating information without restricting users to specific locations complicates data processing activities. There are computing constraints that make mobile database processing different from wired distributed database computing. In this paper, we survey the fundamental research challenges particular to mobile database computing, review some of the proposed solutions, and identify some of the upcoming research challenges. We discuss interesting research areas, including mobile location data management, transaction processing and broadcast, cache management and replication, and query processing. We highlight new research directions in mobile digital libraries, mobile data warehousing, mobile workflow, and the mobile web and e-commerce.
A heterogeneous distributed database environment integrates a set of autonomous database systems to provide global database functions. A flexible transaction approach has been proposed for heterogeneous distributed database environments. In such an environment, flexible transactions can increase the failure resilience of global transactions by allowing alternate (but in some sense equivalent) executions to be attempted when a local database system fails or some subtransactions of the global transaction abort. In this paper, we study the impact of compensation, retry, and switching to alternative executions on global concurrency control for the execution of flexible transactions. We propose a new concurrency control criterion for the execution of flexible and local transactions, termed F-serializability, for error-prone heterogeneous distributed database environments. We then present a scheduling protocol that ensures F-serializability of global schedules. We also demonstrate that this scheduler avoids unnecessary aborts and compensation.
In this paper, we present an open and safe nested transaction model. We discuss the concurrency control and recovery algorithms for our model. Our nested transaction model uses the notion of a recovery point subtransaction in the nested transaction tree. It incorporates a prewrite operation before each write operation to increase the potential concurrency. Our transaction model is termed “open and safe” as prewrites allow early reads (before writes are performed on disk) without cascading aborts. The system's restart and buffer management operations are also modeled as nested transactions to exploit possible concurrency during restart. The concurrency control algorithm proposed for database operations is also used to control concurrent recovery operations. We give a snapshot of complete transaction processing, the data structures involved, and the building of the restart state in case of crash recovery.
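The prewrite idea described above can be illustrated with a toy sketch. This is an assumed simplification (the class and method names are hypothetical, and the model's subtransaction tree and locking are omitted): once a writer announces its new value via a prewrite, readers may see that value before the physical disk write happens, and because the prewrite is issued only after the writer is guaranteed to commit, such early reads cannot cause cascading aborts.

```python
# Toy sketch of the prewrite idea: readers see the announced value
# before the deferred physical write lands on disk.

class DataItem:
    def __init__(self, value):
        self.disk_value = value   # last value physically on disk
        self.prewrite = None      # announced-but-unwritten value

    def do_prewrite(self, value):
        # Issued only after the writer is guaranteed to commit, so
        # early readers of this value cannot be cascade-aborted.
        self.prewrite = value

    def read(self):
        # Early read: prefer the pending prewrite value, if any.
        return self.prewrite if self.prewrite is not None else self.disk_value

    def do_write(self):
        # The deferred physical write finally reaches disk.
        if self.prewrite is not None:
            self.disk_value, self.prewrite = self.prewrite, None

x = DataItem(10)
x.do_prewrite(42)
early = x.read()      # sees 42 before the disk write completes
x.do_write()
final = x.disk_value
```

The concurrency gain comes from the window between `do_prewrite` and `do_write`: in a plain model readers would block for that entire window, whereas here they proceed immediately.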