The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive



Error concealment in MPEG video streams over ATM networks

P Salama, NB Shroff, EJ Delp
Download: PDF

When transmitting compressed video over a data network, one has to deal with how channel errors affect the decoding process. This is particularly a problem with data loss or erasures. In this paper we describe techniques to address this problem in the context of asynchronous transfer mode (ATM) networks; our techniques can be extended to other types of data networks, such as wireless networks. In ATM networks, channel errors or congestion cause data to be dropped, which results in the loss of entire macroblocks when MPEG video is transmitted. In order to reconstruct the missing data, the locations of these macroblocks must be known. We describe a technique for packing ATM cells with compressed data whereby the locations of missing macroblocks in the encoded video stream can be found. This technique also permits the proper decoding of correctly received macroblocks, and thus prevents the loss of ATM cells from affecting the decoding process. The packing strategy can also be used for wireless or other types of data networks. We also describe spatial and temporal techniques for the recovery of lost macroblocks. In particular, we develop several optimal estimation techniques, based on a Markov random field model, that use both spatial and temporal information to reconstruct missing macroblocks. We further describe a suboptimal estimation technique that can be implemented in real time.
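
To make the spatial side of the recovery problem concrete, the sketch below fills a lost macroblock by interpolating from its correctly received border pixels. This is a simplified stand-in, not the paper's estimator: the optimal MRF-based techniques described above also bring in temporal candidates from previously decoded frames.

    import numpy as np

    def conceal_macroblock(frame, top, left, size=16):
        """Fill a lost size x size macroblock by inverse-distance
        interpolation from the four one-pixel borders around it.
        frame: 2-D float array of luma samples; the block is assumed
        to be interior, with all four borders correctly received.
        Illustrative only; the paper's MAP/MRF estimators also use
        temporal information from previously decoded frames."""
        above = frame[top - 1, left:left + size]      # row above the block
        below = frame[top + size, left:left + size]   # row below the block
        west  = frame[top:top + size, left - 1]       # column to the left
        east  = frame[top:top + size, left + size]    # column to the right
        for i in range(size):
            for j in range(size):
                # Weight each border by its inverse distance to (i, j).
                wt, wb = size - i, i + 1
                wl, wr = size - j, j + 1
                frame[top + i, left + j] = (
                    wt * above[j] + wb * below[j] +
                    wl * west[i] + wr * east[i]
                ) / (wt + wb + wl + wr)
        return frame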

Added 2008-04-07

Advances in digital video content protection

E Lin, A Eskicioglu, R Lagendijk, E Delp
Added 2008-04-07

Block artifact reduction using a transform-domain Markov random field model

Z Li, EJ Delp

The block-based discrete cosine transform (BDCT) is often used in image and video coding. At low data rates it may introduce block artifacts, which manifest themselves as annoying discontinuities between adjacent blocks. In this paper, we address this problem by investigating a transform-domain Markov random field (TD-MRF) model. Based on this model, two block artifact reduction postprocessing methods are presented. The first method, referred to as TD-MRF, provides an efficient progressive transform-domain solution. Our experimental results show that TD-MRF can reduce computational complexity by up to 90% compared with spatial-domain MRF (SD-MRF) methods while still achieving comparable visual quality improvements. We then discuss a hybrid framework, referred to as TSD-MRF, that exploits the advantages of both TD-MRF and SD-MRF. The experimental results confirm that TSD-MRF improves visual quality both objectively and subjectively over SD-MRF methods.
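
For orientation, MRF-based deblocking of this kind is usually posed as a constrained MAP estimation problem; in generic notation (ours, not necessarily the paper's):

    \hat{x} = \arg\min_{x \in \mathcal{C}(y)} \sum_{\{i,j\} \in \mathcal{N}} V(x_i - x_j)

where y is the decoded image, \mathcal{C}(y) is the set of images whose BDCT coefficients quantize to the received ones, \mathcal{N} is the neighborhood (clique) system, and V is a clique potential penalizing discontinuities across block boundaries. A TD-MRF model places the neighborhood system and potentials on the transform coefficients themselves, which is what avoids most of the spatial-domain computation.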

Added 2008-04-07

An enhancement of leaky prediction layered video coding

Y Liu, P Salama, Z Li, EJ Delp
Download: PDF

In this paper, we focus on leaky prediction layered video coding (LPLC). LPLC includes a scaled version of the enhancement layer within the motion compensation (MC) loop to improve coding efficiency while maintaining graceful recovery in the presence of error drift. However, there exists a deficiency inherent in the LPLC structure, namely that the reconstructed video quality from both the enhancement layer and the base layer cannot be guaranteed to always be superior to that of using the base layer alone, even when no drift occurs. In this paper, we: 1) highlight this deficiency using a formulation that describes LPLC; 2) propose a general framework that applies to both LPLC and a multiple description coding scheme using MC, and use this framework to further confirm the existence of the deficiency in LPLC; and 3) propose an enhanced LPLC scheme, based on maximum-likelihood estimation, that addresses this deficiency. We then show how our new method performs compared to LPLC.
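
For readers unfamiliar with leaky prediction, the MC reference in LPLC is typically a convex combination of the two layer reconstructions; in generic notation (ours, not necessarily the paper's):

    P_t = (1 - \alpha)\,\hat{x}^{\,b}_{t-1} + \alpha\,\hat{x}^{\,e}_{t-1}, \qquad 0 \le \alpha \le 1

where \hat{x}^{\,b}_{t-1} and \hat{x}^{\,e}_{t-1} are the base-layer and enhancement-layer reconstructions of the previous frame. The leak factor \alpha trades coding efficiency (\alpha close to 1 exploits the higher-quality enhancement reference) against drift resilience (\alpha close to 0 makes errors from a lost enhancement layer decay quickly).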

Added 2008-04-07

Ontology in information security: a useful theoretical foundation and methodological tool

V Raskin, CF Hempelmann, KE Triezenberg, S Nirenburg

The paper introduces and advocates an ontological semantic approach to information security. Both the approach and its resources, the ontology and lexicons, are borrowed from the field of natural language processing and adjusted to the needs of the new domain. The approach pursues two ultimate goals: the inclusion of natural language data sources as an integral part of the overall data sources in information security applications, and the formal specification of the information security community's know-how to support routine and time-efficient measures for preventing and counteracting computer attacks. As a first order of business, the approach offers the information security community a powerful means to organize and unify the terminology and nomenclature of the field.

Added 2008-04-07

The user non-acceptance paradigm: INFOSEC's dirty little secret

SJ Greenwald, KG Olthoff, V Raskin, W Ruch

This panel will address users' perceptions and misperceptions of the risk/benefit and benefit/nuisance ratios associated with information security products, and will grope for a solution, based on the psychology of personality trait-factoring results among other multidisciplinary approaches, to the problem of user non-acceptance of information security products. The problem acquires a much more scientific guise when amalgamated with the psychology of personality and reinforced by reflections from the field on patterns of user behavior. A gross simplification of the panel's main thrust is this thesis: if we start profiling the defenders rather than the offenders, and do it on the basis of real science rather than very crude personality tests, then we will, at the very least, understand what is happening, and possibly create a desirable profile for sysadmins, CIOs, and perhaps even CFOs. This swept-under-the-rug problem is information security's "dirty little secret." No other forum is designed to address it, and it may well become yet another major conceptual and paradigmatic shift in the field, of the type initiated in the NSPWs over the last decade. We expect the panel to generate considerable interest among the participants.

Added 2008-04-07

Ontological semantics, formal ontology, and ambiguity

Sergei Nirenburg, Victor Raskin

Ontological semantics is a theory of meaning in natural language and an approach to natural language processing (NLP) that uses an ontology as the central resource for extracting and representing the meaning of natural language texts, reasoning about knowledge derived from texts, and generating natural language texts based on representations of their meaning. Ontological semantics directly supports such applications as machine translation of natural languages, information extraction, text summarization, question answering, advice giving, and collaborative work of networks of human and software agents. Ontological semantics pays serious attention to its theoretical foundations by explicating its premises; formal ontology and its relations with ontological semantics are therefore important. Besides a brief general discussion of these relations, the paper focuses on the important theoretical and practical issue of the distinction between ontology and natural language. It is argued that this crucial distinction lies not in the (inaccurately) presumed nonambiguity of the one and the well-established ambiguity of the other, but rather in the constructed and overtly defined nature of ontological concepts and labels, on which no human background knowledge can operate unintentionally to introduce ambiguity, as opposed to the pervasive, uncontrolled, and uncontrollable ambiguity of natural language. The emphasis on this distinction, we argue, will provide better theoretical support for the central tenets of formal ontology by freeing it from the Wittgensteinian and Rortyan retreats from the analytical paradigm; it also reinforces the methodology of NLP by maintaining a productive demarcation between the language-independent nature of ontology and the language-specific nature of lexicons, a demarcation that has paid off well in consecutive implementations of ontological semantics and their applications in practical computer systems.

Added 2008-04-07

The genesis of a script for Bankruptcy in ontological semantics

V Raskin, S Nirenburg, CF Hempelmann, I Nirenburg, KE Triezenberg

This paper describes the creation of a script in the framework of ontological semantics as the formal representation of the complex event BANKRUPTCY. This script for BANKRUPTCY serves as the exemplary basis for a discussion of the general motivations for including scripts in NLP, as well as of the discovery process for, and the format of, scripts for the purposes of coreference processing and inferencing, which are required, for example, in high-end Q&A and IE applications.
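
The toy fragment below suggests what a complex-event script can look like; the slot names, fillers, and structure are purely illustrative and are not the resource format used in the paper.

    # Hypothetical sketch of a complex-event script; the slots and
    # subevents are illustrative, not the paper's BANKRUPTCY resource.
    BANKRUPTCY = {
        "is-a": "complex-event",
        "participants": {
            "debtor": "corporation",
            "creditor": "organization",
            "court": "judicial-organization",
        },
        "subevents": [            # temporally ordered component events
            "default-on-debt",    # debtor fails to meet obligations
            "file-petition",      # a petition is filed with the court
            "liquidate-assets",   # court-supervised sale of assets
            "discharge-debt",     # remaining obligations are discharged
        ],
    }

Scripts of this kind let a system resolve coreference across a text ("the filing," "the company") and infer subevents that are never stated explicitly.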

Added 2008-04-07

Developing Engineering Ontology for Information Retrieval

Z Li, V Raskin, K Ramani
Download: PDF

When engineering content is created and applied during the product life cycle, it is often stored and forgotten. Since search remains word-based, engineers do not have effective means to harness and reuse past designs and experiences. Current information retrieval approaches based on statistical methods and keyword matching do not satisfy users' needs in the engineering domain. Therefore, we propose a new computational framework that includes an ontological basis and algorithms to retrieve unstructured engineering documents while handling complex queries. Results from a preliminary test demonstrate that our method outperforms traditional keyword-based search with respect to standard information retrieval measures.
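
A minimal sketch of the core idea, with a hypothetical mini-ontology (not the authors' framework): a query is closed under ontological relations before matching, so documents that use different surface terms for the same concept are still retrieved.

    # Hypothetical mini-ontology: concept -> directly related concepts.
    ONTOLOGY = {
        "fastener": {"bolt", "screw", "rivet"},
        "bolt": {"hex-bolt"},
    }

    def expand(terms):
        """Close a set of query terms under the ontology's relations."""
        expanded, frontier = set(terms), list(terms)
        while frontier:
            for related in ONTOLOGY.get(frontier.pop(), ()):
                if related not in expanded:
                    expanded.add(related)
                    frontier.append(related)
        return expanded

    def score(document_words, query_terms):
        """Count document terms that match the expanded query."""
        return len(expand(query_terms) & set(document_words))

    # A document mentioning "hex-bolt" now matches the query
    # {"fastener"}, which plain keyword matching would miss.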

Added 2008-04-07

Laboratory modules for conducting comparative analysis of 802.11 frames

RA Malik, RA Hansen, JE Goldman, AH Smith

As wireless networking in the enterprise has gained popularity in recent years, the demand for technical talent has increased in direct proportion, partly because of the complexity of troubleshooting and security issues. Professional wireless networking certification programs have also become popular as a result of the financial incentives associated with this demand. Since the content taught in these professional certifications appropriately reflects the challenges faced in the real world, as reported by Fortune magazine and the ChannelWeb network [11], it makes sense to align the content of undergraduate wireless networking courses with that of these certifications.

University professors have often taken the approach of teaching 802.11 wireless networks starting from the signal-processing layer and immediately transitioning to the higher layers, bypassing the Media Access Control (MAC) layer as a consequence. Understanding the MAC layer is of utmost importance for wireless network security because it contains the management frames that control both authentication and encryption.

In this paper, we present course modules for undergraduates that focus on the 802.11 and 802.3 MAC layers and can be used to teach troubleshooting and security concepts for wireless networking with the help of packet sniffers. These modules give students hands-on experience with what is generally illustrated only in text for Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA), and Virtual Private Networking (VPN), as well as with troubleshooting skills.
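
A lab exercise in this spirit might filter a captured trace down to 802.11 management frames with a tool such as Scapy. The sketch below is illustrative, not taken from the modules themselves; the capture file name is hypothetical, and the trace is assumed to have been collected in monitor mode.

    from scapy.all import rdpcap
    from scapy.layers.dot11 import Dot11

    # Common 802.11 management-frame subtypes (frame type 0).
    MGMT_SUBTYPES = {0: "assoc-req", 8: "beacon", 10: "disassoc",
                     11: "auth", 12: "deauth"}

    # "capture.pcap" is a hypothetical monitor-mode trace.
    for pkt in rdpcap("capture.pcap"):
        if pkt.haslayer(Dot11) and pkt[Dot11].type == 0:  # 0 = management
            name = MGMT_SUBTYPES.get(pkt[Dot11].subtype, "other-mgmt")
            print(name, pkt[Dot11].addr2)                 # transmitter MAC

Seeing authentication and deauthentication frames in the clear is exactly the kind of hands-on observation that motivates the WEP/WPA discussion in the modules.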

Added 2008-04-07

Integrating bioinformatics, clinical informatics, and information technology in support of interdisciplinary curriculum development

MD Kane, JL Brewer, JE Goldman, K Moidu

Recent events in both the health care and research communities have increased the opportunities available to information technology and systems integration professionals. In health care, mandatory performance specifications for electronic health care records set forth by the United States federal government have placed essentially all aspects of information technology center stage. Similarly, in scientific research, the completion of the human genome in 2001 has made researchers dependent upon the capabilities of information sciences and technology to convert genomic data into new knowledge regarding human disease, diagnostics, and drug discovery. This manuscript describes our next step in the development of a fully integrated Biomedical Informatics curriculum within the realm of information technology, describing three distinct courses developed for computer and information technology students.

Added 2008-04-07

Metrics Based Security Assessment

JE Goldman, V Christie
Added 2008-04-07

Is streaming media becoming mainstream?

D Bulterman, E Delp, A Eleftheriadis, P Fernicola, R Lanphier, S Tan, S Srinivasan, D Ponceleon
Added 2008-04-03

An overview of security issues in streaming video

ET Lin, GW Cook, EJ Delp, P Salama

In this paper we describe some of the security issues in streaming video over the Internet. If high-quality video sequences are to be delivered to computers and digital television systems over the Internet in our "digital future," this material must be protected.

Added 2008-04-03

An unsupervised color image segmentation algorithm for face detection applications

A Albiol, L Torres, EJ Delp
Download: PDF

This paper presents an unsupervised color segmentation technique to divide skin-detected pixels into a set of homogeneous regions, which can be used in face detection or any other application that requires color segmentation. The algorithm is carried out in two stages, where the chrominance and luminance information are used consecutively. For each stage, a novel algorithm that combines pixel-based and region-based color segmentation techniques is used. The algorithm has proven effective on a large number of test images.
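
As a hedged illustration of the chrominance side only: a common way to obtain the initial skin-pixel mask that such a segmentation starts from is to threshold the Cb/Cr plane. The threshold box below is a widely cited rule of thumb, not the model used in the paper.

    import numpy as np

    def skin_mask(rgb):
        """Label pixels as skin by thresholding chrominance (Cb, Cr).
        rgb: H x W x 3 float array with values in [0, 255]. The box
        thresholds are a common heuristic, not the paper's model."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        # ITU-R BT.601 RGB -> Cb, Cr conversion (luma is not needed).
        cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

The paper's contribution begins where such a mask ends: partitioning the detected skin pixels into homogeneous regions using chrominance and then luminance.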

Added 2008-04-03