This project examines the nature of privacy in communicative, social, and technological contexts, and is based on the premise that the ubiquity of technology and networked environments has changed multiple aspects of communicative processes. These include the realms of social interaction, the medium by which humans communicate, the means by which information about people is collected and stored, and the visibility of individuals in social worlds. These changes call into question the conceptualizations and boundaries of what constitutes privacy in today’s world. Craig’s (1999) metatheoretical traditions of communication theory are used to reconceptualize privacy, yielding a framework with seven distinct lenses: privacy as identity; privacy as relational(ity); privacy as social; privacy as cultural; privacy as autonomy; privacy as mediated; and privacy as discursive. These multi-theoretical approaches provide a structure or logic for organizing new understandings for privacy theory in the 21st century. Following the development of this framework, two empirical studies were completed: (a) a discursive examination of the meaning of privacy for young millennial adults; and (b) a social network analysis of the social structure of privacy. The results provide some intriguing insights.
First, the analysis of the social structure of privacy provides preliminary evidence that privacy does not influence the size or density of a network or the positioning of individuals within it, although it does reveal unique signatures suggesting that privacy may influence the clustering and compartmentalization of groups within networks. Second, a survey of privacy attitudes confirms that young adults have strong concerns about privacy, especially about identity theft, electronic fraud, and controlling access to their identities and information. Finally, an inductive thematic analysis of discourse confirms that privacy is meaningful to young adults and is defined and articulated in multiple and sometimes dialectical ways: between (a) identity and relationality, (b) states of autonomy (being in control) and mediation (being surveilled by others), and (c) the material and the discursive. Cross-cutting these dialectics are themes of cultural and contextual elements of privacy.
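As a purely illustrative aside (the abstract does not specify the data or tooling used in the network analysis), the sketch below shows how the structural measures named above, network size, density, clustering, and group compartmentalization, might be computed for a small invented ego network with the Python library networkx; every node, edge, and grouping here is hypothetical.

```python
# A minimal sketch (assuming the networkx library) of the structural measures
# named above: network size, density, clustering, and group compartmentalization.
# The ego network below is entirely invented for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [
    ("ego", "a1"), ("ego", "a2"), ("ego", "a3"),  # hypothetical circle 1
    ("a1", "a2"), ("a2", "a3"),
    ("ego", "b1"), ("ego", "b2"),                 # hypothetical circle 2
    ("b1", "b2"),
]
G = nx.Graph(edges)

size = G.number_of_nodes()                 # network size
density = nx.density(G)                    # share of possible ties that are present
clustering = nx.average_clustering(G)      # tendency of contacts to form clusters
groups = greedy_modularity_communities(G)  # proxy for compartmentalization of groups

print(f"size={size}, density={density:.2f}, clustering={clustering:.2f}")
print("detected groups:", [sorted(c) for c in groups])
```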
The contributions of this project are both theoretical and methodological. This is one of the first empirical studies to examine the discursive meaning of privacy to young college-age adults within sociotechnical realms, and the first empirical study of privacy from a social networks standpoint. Theoretically, I provide new conceptual frameworks for theorizing about privacy in communication contexts and, through the methodological use of social network analysis, expand understanding of how privacy is enacted in social contexts. These understandings contribute to both social theory and communication theory. Finally, new methodological approaches are introduced for accessing and processing large-scale network data from social network sites, which may be of interest to scholars.
This research investigates the development and testing of the Human-Biometric Sensor Interaction (HBSI) Evaluation Method, which used ergonomics, usability, and image quality criteria as explanatory variables of overall biometric system performance to evaluate swipe-based fingerprint recognition devices. The HBSI method was proposed in response to questions about the thoroughness of the traditional testing and performance metrics used in standardized evaluation methods, such as failure to acquire (FTA), failure to enroll (FTE), false accept rate (FAR), and false reject rate (FRR): specifically, whether these metrics are sufficient to fully test and understand biometric systems, and whether important data were going uncollected.
The Design and Evaluation of the Human-Biometric Sensor Interaction Method had four objectives: (a) analyze the literature to determine what influences the interaction of humans and biometric devices, (b) develop a conceptual model based on previous research, (c) design two alternative swipe fingerprint sensors, and (d) compare how people interact with the commercial and designed swipe fingerprint sensors, to examine whether changing the form factor improves the usability of the device in terms of the proposed HBSI evaluation method.
Data were collected from 85 individuals over three visits, accounting for 33,394 interactions with the four sensors used. The HBSI Evaluation Method provided additional detail about how users interact with the devices, collecting data on image quality, number of detected minutiae, fingerprint image size, fingerprint image contrast, user satisfaction, task time, task completeness, user effort, and number of assists, in addition to the traditional biometric testing and reporting metrics of acquisition failures (FTA), enrollment failures (FTE), and matching performance (FAR and FRR).
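To make the scope of these measurements concrete, the following sketch outlines one possible per-interaction record combining the HBSI and traditional metrics listed above; the field names and types are assumptions for illustration, not the study's actual data schema.

```python
# Hypothetical per-interaction record combining the HBSI and traditional metrics
# listed above; field names and types are illustrative assumptions, not the
# study's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interaction:
    participant_id: str
    visit: int                     # 1, 2, or 3
    sensor: str                    # e.g., "UPEK", "PUSH", "PULL", "large area"
    # HBSI metrics
    image_quality: Optional[float]
    minutiae_count: Optional[int]
    image_size: Optional[int]      # e.g., pixels captured
    image_contrast: Optional[float]
    satisfaction: Optional[int]
    task_time_s: Optional[float]
    task_complete: bool
    user_effort: Optional[int]
    assists: int
    # Traditional metrics
    failure_to_acquire: bool       # FTA
    failure_to_enroll: bool        # FTE
    match_score: Optional[float]   # basis for FAR/FRR at a chosen threshold
```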
Results from the HBSI Evaluation Method revealed that traditional biometric evaluations focused on system-reported metrics do not provide sufficient reporting detail. For example, matching performance for the right and left index fingers showed an FRR under 1% for all sensors at the operating point of 0.1% FAR: UPEK (0.24%), PUSH (0.98%), PULL (0.36%), and large area (0.34%). However, the FTA rate was 11.28%, accounting for 3,768 presentations. From this research, two metrics previously unaccounted for and subsumed within the traditional FTA rate, Failure to Present (FTP) and False Failure to Present (FFTP), were created to better understand human interaction with biometric sensors and to attribute errors accordingly. The FTP rate accounted for 1,187 of the 3,768 (31.5%) interactions traditionally labeled as FTAs. The FFTP rate was much smaller, at 0.35%, but can give researchers further insight to help explain abnormal behaviors in matching rates and in ROC and DET curves. In addition, the traditional metrics of image quality and number of detected minutiae did not reveal a statistical difference across the sensors; however, the HBSI metrics of fingerprint image size and contrast did, indicating that the design of the PUSH sensor produced images with less gray-level variation, while the PULL sensor produced images with greater pixel consistency during some of the data collection visits. The level of learning, or habituation, was also documented in this research through three metrics: task completion, Maximum User Effort (MUE), and the number of assists provided. All three showed the lowest rates for the PUSH sensor, but it improved the most over the visits, which was a function of learning how to use a "push"-based swipe sensor as opposed to the "pull" swipe type.
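As a simple arithmetic check of the reclassification described above (not part of the original analysis), the snippet below reproduces the reported FTP share of traditionally labeled FTAs from the counts given in the text: 1,187 of 3,768 presentations.

```python
# Arithmetic check of the reclassification reported above, using only the
# counts given in the text.
fta_presentations = 3768   # presentations labeled FTA under traditional reporting
ftp_presentations = 1187   # of those, reclassified as Failure to Present (FTP)

ftp_share = ftp_presentations / fta_presentations
print(f"FTP share of traditional FTAs: {ftp_share:.1%}")  # 31.5%
```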
Overall, the HBSI Evaluation Method provides a foundation for future biometric evaluations, as it links system feedback from erroneous interactions to the human-sensor interaction that caused the failure. This linkage will enable system developers and researchers to re-examine the data and determine whether errors result from the algorithm or from the human interaction, and whether they can be addressed through revised training techniques, design modifications, or other adjustments in the future.
E-mail fraud has become prevalent in cyberspace and is currently a major technique used by cyber criminals to swindle victims. E-mail fraud is a category of spam, or unsolicited bulk e-mail [1]. Spam filter research has been very active in combating e-mail spam, ranging from statistical methods for text categorization to newer methods that define user preference ontologies to classify incoming e-mails. Many of these methods have limitations, or an upper bound, beyond which they can be bypassed simply by misspelling, manipulating, or rephrasing the text. The research proposed in this composition uses a new technique based on a powerful tool known as ontological semantics. Ontological semantics gives direct access to a text's meaning, which in turn helps accurately classify and categorize unsolicited bulk e-mails. This study will provide insight into less effective current spam filter techniques and discuss their limitations relative to the proposed ontological spam filter.
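For context on the statistical baselines this proposal critiques, the sketch below shows a minimal bag-of-words naive Bayes spam classifier using scikit-learn; the toy corpus is invented, the classifier is a generic illustration rather than any specific filter discussed in the study, and the ontological-semantics approach itself is not implemented here.

```python
# Minimal sketch of a bag-of-words naive Bayes spam filter (scikit-learn assumed),
# the kind of statistical baseline discussed above. The toy corpus is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "claim your lottery prize now",        # spam
    "urgent wire transfer needed today",   # spam
    "meeting agenda for tomorrow",         # ham
    "lunch plans for this week",           # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(train_texts), train_labels)

# An obfuscated message: most tokens are misspelled variants the model never saw,
# so they fall outside the learned vocabulary and carry no evidence.
obfuscated = "cla1m your l0ttery pr1ze n0w"
usable = [t for t in vec.build_analyzer()(obfuscated) if t in vec.vocabulary_]
print("tokens the classifier can use:", usable)           # only 'your' survives
print("prediction:", clf.predict(vec.transform([obfuscated])))
```

The point of the sketch is only the vocabulary gap: a surface-token classifier loses most of its evidence under trivial misspelling or rephrasing, which is the limitation the proposed ontological approach is intended to address by working from meaning rather than surface form.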