In this paper we integrate several spatial texture tools into a texture-based video coding scheme. We implement texture analysis and segmentation strategies to detect texture regions in video sequences. These regions are then analyzed using temporal motion techniques and labeled as skipped areas that are not encoded. After the decoding process, frame reconstruction is performed by inserting the skipped texture areas into the decoded frames. We show an improvement over previous texture-based implementations in terms of compression efficiency.
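To make the reconstruction step concrete, here is a minimal sketch, assuming a binary texture mask from the segmentation stage and a synthesized texture patch; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def reconstruct_frame(decoded_frame, texture_mask, synthesized_texture):
    """Insert synthesized texture into the regions that were skipped at
    the encoder (mask == True); keep decoded pixels everywhere else.

    decoded_frame, synthesized_texture: HxWx3 uint8 arrays
    texture_mask: HxW boolean array marking the skipped texture regions
    """
    out = decoded_frame.copy()
    out[texture_mask] = synthesized_texture[texture_mask]
    return out

# Example: fill a 16x16 block that was labeled as texture and skipped.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
texture = np.full((64, 64, 3), 128, dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[16:32, 16:32] = True
rec = reconstruct_frame(frame, texture, mask)
```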
The objective of this paper is to show that for every color space there exists an optimal skin detection scheme such that the performance of all these optimal schemes is the same. To that end, a theoretical proof is provided, and experiments are presented which show that the separability of the skin and non-skin classes is independent of the color space chosen.
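As an illustration of the experimental claim (not the theoretical proof itself), the sketch below estimates the Bayes error between skin and non-skin pixels in two color spaces related by an invertible transform; up to binning effects, the two estimates should agree. All names and the synthetic data are illustrative:

```python
import numpy as np

def bayes_error_estimate(skin, nonskin, bins=32):
    """Histogram estimate of the Bayes error between two pixel classes.

    skin, nonskin: (N, 3) arrays of pixels in some color space, scaled
    to [0, 1].  Equal class priors are assumed for simplicity."""
    bounds = [(0.0, 1.0)] * 3
    p_s, _ = np.histogramdd(skin, bins=bins, range=bounds, density=True)
    p_n, _ = np.histogramdd(nonskin, bins=bins, range=bounds, density=True)
    cell = (1.0 / bins) ** 3            # volume of one histogram cell
    return 0.5 * np.sum(np.minimum(p_s, p_n)) * cell

def rgb_to_ycbcr(rgb):
    """An invertible linear map from RGB to (offset) YCbCr."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])
    return rgb @ m.T + np.array([0.0, 0.5, 0.5])

# Synthetic data: the color-space change is invertible, so the two
# separability estimates should agree up to binning effects.
gen = np.random.default_rng(0)
skin = np.clip(gen.normal([0.7, 0.5, 0.4], 0.05, (5000, 3)), 0, 1)
nonskin = gen.uniform(0, 1, (5000, 3))
print(bayes_error_estimate(skin, nonskin))
print(bayes_error_estimate(rgb_to_ycbcr(skin), rgb_to_ycbcr(nonskin)))
```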
Numerous studies have identified links among culture, user preferences, and Web site usability. Most of these studies report findings from a behavioral perspective, explaining how cultural factors affect the processes of Web-related content design and use. Based on the research of Vygotsky and Nisbett, the authors propose a broader model, referred to as “cultural cognition theory,” by which Web design, like other types of information production, is seen as being shaped by cultural cognitive processes that influence the designers’ cognitive style. This study explores issues related to Web designers’ cultural cognitive styles and their impact on user responses. The results of an online experiment that exposed American and Chinese users to sites created by both Chinese and American designers indicate that users perform information-seeking tasks faster when using Web content created by designers from their own cultures.
Previously we presented a network-driven Wyner-Ziv video coding method, in which the motion vectors are derived at the decoder and sent back to the encoder through a reliable backward channel. In this paper, we consider the scenario in which the backward channel is error-prone, and we study the performance of error resilient methods for our codec. A symmetric Reversible Variable Length Code (RVLC) is used to reduce the bandwidth requirement of the backward channel. A hybrid scheme with selective coding is proposed to improve the coding efficiency when transmission delay occurs. The experimental results show that these error resilient methods consistently improve the video quality at the decoder.
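For background on the RVLC component, the sketch below checks the fix-free (prefix-free and suffix-free) property that makes a variable-length code reversible, and decodes a symmetric (palindromic) codebook in both directions. The codebook is a toy example, not the code used in the paper:

```python
def is_fix_free(codebook):
    """A reversible VLC must be both prefix-free and suffix-free, so the
    bitstream can be decoded instantaneously in either direction."""
    words = list(codebook.values())
    for a in words:
        for b in words:
            if a != b and (b.startswith(a) or b.endswith(a)):
                return False
    return True

def decode(bits, codebook, backward=False):
    """Instantaneous decoding of a fix-free code, optionally from the end."""
    inv = {v: k for k, v in codebook.items()}
    if backward:
        bits = bits[::-1]
        inv = {v[::-1]: k for k, v in codebook.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return out[::-1] if backward else out

# A symmetric RVLC: every codeword is a palindrome, so the same table
# serves forward and backward decoding.
cb = {"a": "0", "b": "11", "c": "101"}
assert is_fix_free(cb)
msg = ["a", "c", "b", "a"]
bits = "".join(cb[s] for s in msg)          # "0101110"
assert decode(bits, cb) == msg
assert decode(bits, cb, backward=True) == msg
```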
The article refines the view that the Internet is increasingly incorporated into everyday life, concluding that the new medium has been partially integrated into the “communication infrastructure” of English-speaking Los Angeles neighborhoods. Here, Internet connectedness is associated with civic participation and indirectly contributes to “belonging” to a residential community. However, in predominantly Asian and Latino areas, the Internet is disengaged from the communication environments that lead to belonging and is instead associated with mainstream media. In these communities its contribution is contradictory: although it probably contributes to the process of ethnic assimilation, it might also lead to disengagement of the most educated and technologically savvy residents from their neighborhoods. A possible “magnifying glass effect” is proposed as an explanation for the differential integration of new media in community life.
When viewed as a communication task, the watermarking process can be split into three main steps: watermark generation and embedding (information transmission), possible attacks (transmission through the channel), and watermark retrieval (information decoding at the receiver side). We review the main issues in watermark generation and embedding. Focusing on the case of image watermarking, we first discuss the choice of the image features the watermark is superimposed on. Then we consider watermark generation and the rule used to insert the watermark within the host features. Again adopting a communication perspective, we give some useful hints on the way the watermark should be shaped and inserted within the host document for increased robustness against attacks. Given that invisibility is one of the main requirements a watermark must satisfy, the way psycho-visual notions can be used to effectively hide the watermark within an image is carefully reviewed. Rather than insisting on the mathematical aspects of each of the above issues, we present the main rationale behind the most commonly adopted approaches, along with some illustrative examples.
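As one concrete instance of an embedding rule of the kind discussed, here is a sketch of multiplicative spread-spectrum embedding in the largest-magnitude DCT coefficients, in the spirit of Cox et al.; the parameter values and names are illustrative, not the specific scheme reviewed:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(image, key, n_coeffs=1000, gamma=0.1):
    """Multiplicative spread-spectrum embedding: v' = v * (1 + gamma*w),
    applied to the largest-magnitude (perceptually significant) DCT
    coefficients.  The watermark w is a pseudo-random Gaussian sequence
    generated from a secret key."""
    coeffs = dctn(image.astype(float), norm="ortho")
    flat = coeffs.ravel()
    # Rank coefficients by magnitude, skip the DC term at index 0.
    idx = np.argsort(np.abs(flat))[::-1]
    idx = idx[idx != 0][:n_coeffs]
    w = np.random.default_rng(key).standard_normal(len(idx))
    flat[idx] *= (1.0 + gamma * w)
    return idctn(flat.reshape(coeffs.shape), norm="ortho")

img = np.random.default_rng(1).uniform(0, 255, (256, 256))
marked = embed_watermark(img, key=42)
```

Embedding in high-magnitude coefficients and keeping gamma small is one simple way to exploit perceptual masking: strong host components hide the perturbation.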
The paper analyzes the 48 contiguous states of the Union and their ability to create and maintain online communities (Yahoo! groups). Multiple regression analysis indicates that the number of online groups and the overall amount of online activity increase with the amount of social capital. Also, ethnic homogeneity positively influences the number of online groups, while population density and the number of IT workers are positively associated with the level of online activity. In broad terms, the analyses support the idea that the Internet strengthens offline interaction, with sociability online building on sociability offline.
Virtual communities are discussed as expressions of the modern tension between individuality and community, emphasizing the role that counterculture and its values played in shaping the virtual community project. This article analyzes postings to the WELL conferences and the online groups that served as incubators and testing ground for the term “virtual community,” revealing how this concept was culturally shaped by the countercultural ideals of WELL users and how the tension between individualism and communitarian ideals was dealt with. The overarching conclusion is that virtual communities act both as solvent and glue in modern society, being similar to the “small group” movement.
In ATM networks, cell loss or channel errors can cause data to be dropped in the channel. When digital images or video are transmitted over these networks, one must be able to reconstruct the missing data so that the impact of the errors is minimized. We give an overview of the problems that arise when EZW encoders are used over channels where data loss is possible. We also describe an error resilience scheme, based on unequal error protection and data interleaving, that addresses the problem of using rate scalable encoders over ATM networks.
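A minimal sketch of the block-interleaving idea is given below; it is illustrative only, and the paper's actual interleaver and unequal-error-protection assignment are not reproduced:

```python
def interleave(packets, depth):
    """Block interleaver: fill a table row by row (`depth` rows) and
    read it out column by column.  A burst of consecutive cell losses
    in the channel is then dispersed across the de-interleaved stream,
    so no single part of the embedded bitstream loses a long run."""
    assert len(packets) % depth == 0
    width = len(packets) // depth
    rows = [packets[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(packets, depth):
    """Inverse operation: swap the roles of rows and columns."""
    assert len(packets) % depth == 0
    return interleave(packets, len(packets) // depth)

data = list(range(12))
tx = interleave(data, depth=3)   # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
assert deinterleave(tx, depth=3) == data
```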
Protection of intellectual property is a critical issue in digital multimedia distribution systems. Cryptographic tools are commonly used for secure delivery of content and access keys to consumers via terrestrial, satellite, cable and Internet transmissions. A third requirement is the distribution of the copyright or usage rights associated with the digital content. The integrity, as opposed to security, of this data is essential to prevent unauthorized modification. Two approaches have been proposed in the open literature: allocating special fields in the transport stream and embedding a watermark into multimedia content. We present two new methods, based on secret sharing, to create channels with guaranteed data integrity.
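As a representative secret-sharing primitive on which such integrity channels can be built, here is a sketch of Shamir's (k, n) scheme; this is standard background, not necessarily the authors' exact construction:

```python
import random

P = 2**127 - 1   # a Mersenne prime; all arithmetic is in GF(P)

def make_shares(secret, k, n):
    """Shamir (k, n) secret sharing: the secret is the constant term of
    a random polynomial of degree k-1; each share is one evaluation.
    Any k shares reconstruct the secret; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789
```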
In this paper, ultrasound breast image segmentation is improved by using the volumetric data available in neighboring slices. The new algorithm extends the EM/MPM framework to 3D by including pixels from neighboring frames in the Markov Random Field (MRF) clique. In addition, this paper describes a unique linear cost factor introduced in the optimization loop to compensate for the attenuation common to ultrasound images.
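A sketch of the 3-D extension of the clique energy is given below, assuming a Potts-style pairwise potential; the EM parameter estimation and the linear attenuation-compensation factor are omitted, and the names and weights are illustrative:

```python
import numpy as np

def potts_energy(labels, beta_inplane=1.0, beta_depth=0.5):
    """Potts clique energy over a 3-D label volume (slice, row, col).

    The first two terms are the usual 2-D in-plane MRF cliques; the
    third couples each pixel to the same pixel in neighboring slices,
    which is how volumetric information enters the segmentation.
    beta_depth may be set below beta_inplane when slices are spaced
    farther apart than in-plane pixels."""
    e  = beta_inplane * np.sum(labels[:, 1:, :] != labels[:, :-1, :])
    e += beta_inplane * np.sum(labels[:, :, 1:] != labels[:, :, :-1])
    e += beta_depth   * np.sum(labels[1:, :, :] != labels[:-1, :, :])
    return e

# e.g. a 3-slice volume with 4 tissue classes
vol = np.random.default_rng(2).integers(0, 4, size=(3, 64, 64))
print(potts_energy(vol))
```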
Unlike conventional layered scalable video coding, leaky prediction layered video coding (LPLC) introduces a leaky factor α, taking values between 0 and 1, to partially include the enhancement layer in the motion compensation loop, thereby obtaining a trade-off between coding efficiency and error resilience. In this paper, we use quantization noise modeling to theoretically analyze the rate distortion performance of LPLC. An alternative block diagram of LPLC is first developed, which significantly simplifies the theoretical analysis. Closed-form expressions, as functions of the leaky factor, are derived for two scenarios: one where drift error occurs in the enhancement layer, and one where no drift occurs within the motion compensation loop. The theoretical results are evaluated with respect to the leaky factor, showing that a leaky factor of 0.4-0.6 is a good choice in terms of the overall rate distortion performance of LPLC.
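A minimal sketch of how the leaky factor enters the prediction loop, under the usual LPLC formulation (the names are illustrative):

```python
import numpy as np

def leaky_reference(base_recon, enh_refinement, alpha):
    """Prediction reference for the enhancement layer in LPLC.

    alpha = 0: predict from the base layer only (maximally error
    resilient, least efficient).  alpha = 1: fully include the
    enhancement layer in the MCP loop (most efficient, drift-prone).
    With 0 < alpha < 1, drift injected by a lost enhancement packet is
    attenuated by a factor of alpha at every recursion of the loop."""
    assert 0.0 <= alpha <= 1.0
    return base_recon + alpha * enh_refinement
```

The geometric decay of drift with alpha is what produces the efficiency/resilience trade-off that the analysis resolves in favor of intermediate values around 0.4-0.6.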
The globalization of telecommunicative ties between nations is studied from a heterogenization perspective. A theoretical model inspired by Appadurai’s “disjuncture hypothesis,” which stipulates that global flows of communication are multidimensional and reinforce regional/local identities, is tested empirically on an international voice traffic dataset. Spatial-statistical measures (global and local versions of Moran’s I) indicate that countries that share the same linguistic (English, Spanish, or French) or civilizational (Catholic, Protestant, and Buddhist–Hindu) background are more likely to be each other’s “telecommunicative neighbors” and that this tendency has increased over time (1989–1999).
Generally speaking, rate scalable video systems today are evaluated operationally, meaning that the algorithm is implemented and the rate-distortion performance is evaluated for an example set of inputs. In such cases, however, it is difficult to separate the artifacts caused by the compression algorithm and the data set from the general trends associated with scalability. In this paper, we derive and evaluate theoretical rate-distortion performance bounds for both layered and continuously rate scalable video compression algorithms that use a single motion-compensated prediction (MCP) loop. These bounds are derived using rate-distortion theory based on an optimum mean-square error (MSE) quantizer, and are thus applicable to all methods of intraframe encoding that use MSE as a distortion measure. By specifying translatory motion and using an approximation of the power spectral density of the predicted error frame, it is possible to derive parametric versions of the rate-distortion functions which are based solely on the input power spectral density and the accuracy of the motion-compensated prediction. The theory is applicable to systems that allow prediction drift, such as the data-partitioning and SNR-scalability schemes in MPEG-2, as well as those with zero prediction drift, such as fine granularity scalability in MPEG-4. For systems that allow prediction drift, we show that optimum motion compensation is a sufficient condition for stability of the decoding system.
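For reference, a standard building block for such bounds is the parametric (water-filling) form of the MSE rate-distortion function of a stationary Gaussian source with power spectral density Φ(ω); in the MCP setting, Φ is replaced by the approximated PSD of the prediction error frame:

```latex
D(\theta) = \frac{1}{(2\pi)^2} \iint_{[-\pi,\pi]^2}
            \min\!\bigl(\theta,\ \Phi(\boldsymbol{\omega})\bigr)\, d\boldsymbol{\omega},
\qquad
R(\theta) = \frac{1}{(2\pi)^2} \iint_{[-\pi,\pi]^2}
            \max\!\Bigl(0,\ \tfrac{1}{2}\log_2\frac{\Phi(\boldsymbol{\omega})}{\theta}\Bigr)\, d\boldsymbol{\omega}.
```

Sweeping the water level θ traces out the rate-distortion curve, with R in bits per pixel.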
Real-time multimedia applications over the Internet pose many challenges due to the lack of quality-of-service (QoS) guarantees, frequent fluctuations in channel bandwidth, and packet losses. To address these issues, a great deal of research has been done in both the video coding and video transmission fields. In this paper we present a logarithm-based TCP-friendly rate control (L-TFRC) mechanism, which estimates the available bandwidth more accurately and significantly improves the smoothness of multimedia streaming. We also apply it to progressive fine granularity scalable (PFGS) video streaming. Both simulations and experiments over the Internet confirm the performance of L-TFRC.
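For context, TFRC-style controllers start from the TCP throughput equation of Padhye et al.; the sketch below computes the TCP-friendly rate for a given loss rate and RTT. The logarithm-based control law itself is the paper's contribution and is not reproduced here:

```python
import math

def tcp_friendly_rate(s, rtt, p, t_rto=None):
    """TCP throughput equation (Padhye et al.) used by TFRC-style rate
    control: the rate a conformant TCP flow would achieve for packet
    size s (bytes), round-trip time rtt (seconds), and loss event
    rate p.  t_RTO defaults to the common 4*RTT simplification."""
    if p <= 0:
        return float("inf")          # no observed loss: rate unbounded
    t_rto = t_rto if t_rto is not None else 4 * rtt
    denom = (rtt * math.sqrt(2 * p / 3)
             + t_rto * 3 * math.sqrt(3 * p / 8) * p * (1 + 32 * p * p))
    return s / denom                 # bytes per second

# e.g. 1500-byte packets, 100 ms RTT, 1% loss -> roughly 1.3 Mbit/s
print(tcp_friendly_rate(1500, 0.1, 0.01) * 8 / 1e6)
```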