Reports and Papers Archive
Website forgery is a type of web-based attack in which the phisher builds a website that is either an independent site or a replica of a legitimate website, with the goal of deceiving users into divulging information that could be used to defraud them or to launch other attacks against them. In this paper we identify the different types of website forgery phishing attacks and the non-technical countermeasures available to users, particularly non-IT users who lack an understanding of how phishing attacks work and of how they can protect themselves from these criminals.
In this paper I review the literature on digital forensics investigation models and how they apply to field investigators. A brief history of community supervision and of how offenders are supervised is established. I also cover the differences between community supervision standards and police standards concerning searches, evidence, and standards of proof, and the difference between parole boards and courts. Currently, the burden of digital forensics for community supervision officers is placed on local or state law enforcement offices, whose forensically trained personnel may not place a high priority on outside cases. Forensic field training for community supervision officers could ease the caseloads of outside forensic specialists and increase fiscal responsibility by improving efficiency and public safety in the field of community supervision.
In this paper, we compare, analyze, and study the behavior of malware processes within both Type 1 and Type 2 virtualized environments. To achieve this, we set up two different virtualized environments and thoroughly analyzed the behavior of each malware process. The goal was to determine whether malware behaves differently within the two architectures. We found no significant difference in how malware processes run and behave in either virtualized environment. However, our study is limited to basic analysis using basic tools; a more advanced analysis with more sophisticated tools could prove otherwise.
We have seen an evolution of increasing scale and complexity in enterprise-class distributed applications, such as web services providing anything from critical infrastructure services to electronic commerce. With this evolution, it has become increasingly difficult to understand how these applications perform, when they fail, and what can be done to make them more resilient to failures, whether caused by hardware or by software. Application developers tend to focus on bringing their applications to market quickly without testing the complex failure scenarios that can disrupt or degrade a given web service. Operators configure these web services without complete knowledge of how the configurations interact with the various layers. Matters are not helped by ad hoc and often poor-quality failure logs generated by even mature and widely used software systems. Worse still, both end users and servers sometimes suffer from “silent problems,” where something goes wrong without any immediately obvious end-user manifestation. To address these reliability issues, it is crucial to characterize and detect software problems with some post-detection diagnostic context.

This dissertation first presents a fault-injection and bug-repository-based evaluation to characterize silent and non-silent software failures and configuration problems in three-tier web applications and Java EE application servers. Second, for detection of software failures, we develop simple, low-cost application-generic and application-specific consistency checks, while for duplicate web requests (a class of performance problems) we develop a generic autocorrelation-based algorithm at the server end. Third, to provide diagnostic context as a post-detection step for performance problems, we develop an algorithm based on pairwise correlation of system metrics to diagnose the root cause of the detected problem.
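To illustrate the flavor of the duplicate-request detection mentioned above, the sketch below shows how autocorrelation over request inter-arrival times can expose duplicated web requests. It is a minimal illustration of the general idea only; the function names, lag window, and threshold are hypothetical and do not reproduce the dissertation's actual algorithm.

```python
import numpy as np

def normalized_acf(series):
    """Normalized autocorrelation of a 1-D series of inter-arrival times."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0] if acf[0] else acf

def duplicate_request_signal(gaps, max_lag=5, threshold=0.6):
    """If each request is echoed by a near-immediate duplicate, the gap
    series alternates tiny/normal values, producing strong (positive or
    negative) autocorrelation at small lags."""
    acf = normalized_acf(gaps)
    return [lag for lag in range(1, min(max_lag + 1, len(acf)))
            if abs(acf[lag]) >= threshold]

# Gaps between successive requests for one URL: every request is
# duplicated ~10 ms later, so the series alternates small/large.
gaps = [0.01, 2.0, 0.01, 1.5, 0.01, 3.0, 0.01, 2.5]
print(duplicate_request_signal(gaps))  # strong signal at lags 1 and 2
```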
The need to ensure the primary functionality of any system means that considerations of security are often secondary. Computer security decisions are made in relation to usability, functionality, productivity, and other goals, and decision-making related to security is about finding an appropriate tradeoff. Most existing security mechanisms take a binary approach in which an action is either malicious or benign, and therefore allowed or denied. However, security and privacy outcomes are often fuzzy and cannot be represented by a binary decision. It is useful for end users, who may ultimately need to allow or deny an action, to understand the potential differences among objects, and the way these differences are communicated matters.

In this work, we use machine learning and feature extraction techniques to model normal behavior in various contexts and then use those models to measure the degree to which new behavior is anomalous. This measurement can then be used not as a binary signal but as a more nuanced indicator that can be communicated to a user to help guide decision-making.

We examine the application of this idea in two domains. The first is the installation of applications on a mobile device. The focus in this domain is on permissions that represent capabilities and access to data, and we generate a model of expected permission requests. Various user studies were conducted to explore effective ways to communicate this measurement so as to influence decision-making by end users. Next, we examined the domain of insider threat detection in the setting of a source code repository. The goal was to build models of expected user access and more accurately predict the degree to which new behavior deviates from previous behavior. This information can be utilized and understood by security personnel to focus on unexpected patterns.
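As a rough illustration of the permission-modeling idea in the first domain, the sketch below scores how surprising an app's permission requests are relative to apps of the same category. This is a minimal hypothetical sketch, not the dissertation's model; the training data, scoring function, and names are assumptions for illustration.

```python
from collections import Counter

def train_permission_model(category_apps):
    """category_apps: a list of permission sets, one per app in a category.
    Returns the empirical frequency of each permission in that category."""
    n = len(category_apps)
    counts = Counter(p for perms in category_apps for p in set(perms))
    return {perm: c / n for perm, c in counts.items()}

def anomaly_degree(model, requested):
    """Average rarity of the requested permissions: 0.0 means every
    request is typical for the category, 1.0 means none has been seen."""
    if not requested:
        return 0.0
    return sum(1.0 - model.get(p, 0.0) for p in requested) / len(requested)

# Hypothetical "flashlight apps" training set and a suspicious newcomer.
model = train_permission_model([
    {"CAMERA"}, {"CAMERA"}, {"CAMERA", "VIBRATE"},
])
print(anomaly_degree(model, {"CAMERA", "READ_CONTACTS", "SEND_SMS"}))  # ~0.67
```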
One major impediment to large-scale use of cloud services is concern for the confidentiality of the data and of the computations carried out on it. This dissertation advances the state of the art in secure and private outsourcing to untrusted cloud servers by solving three problems in the computational outsourcing setting and by extending the semantics of oblivious storage in the storage outsourcing setting.

In computational outsourcing, this dissertation provides protocols for two parties to collaboratively design engineering systems and check certain properties of the co-designed system with the help of a cloud server, without leaking the design parameters to each other or to the server. It also provides approaches to outsource two computationally intensive tasks, image feature extraction and generalized matrix multiplication, preserving the confidentiality of both the input data and the output result. Experiments are included to demonstrate the viability of the protocols.

In storage outsourcing, this dissertation extends the semantics of the oblivious storage scheme by providing algorithms to support nearest-neighbor search. It enables clients to perform nearest-neighbor queries on the outsourced storage without leaking the access pattern.
Meaning-Based Machine Learning (MBML) is a research program intended to show that training machine learning (ML) algorithms on meaningful data produces more accurate results than training them on unstructured data.
Security for public cloud providers is an ongoing concern. Programs like FedRAMP seek to certify a minimum level of compliance. This project aims to build a tool that helps decision makers compare different cloud solutions and weigh their risks against organizational needs.
Our goal is to improve the detection of phishing emails by using natural language processing (NLP) technology that models the semantic meaning behind the email text.
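As a toy illustration of this direction, the fragment below flags emails in which an actionable verb, a sensitive object, and a link co-occur. The word lists, weights, and function name are invented for illustration; a system like the one described would model semantics far more deeply, e.g., via parsing and verb-object analysis.

```python
import re

# Hypothetical word lists; a real system would derive these from parsed
# verb-object relations rather than flat keyword matching.
ACTION_VERBS = {"verify", "confirm", "update", "click", "reset", "validate"}
SENSITIVE_OBJECTS = {"account", "password", "credentials", "card", "ssn"}

def phishing_signal(text):
    """Crude 0-1 score: does the email ask the reader to act on
    something sensitive, and does it supply a link to do so?"""
    words = set(re.findall(r"[a-z]+", text.lower()))
    has_verb = bool(words & ACTION_VERBS)
    has_object = bool(words & SENSITIVE_OBJECTS)
    has_link = "http" in text.lower()
    return 0.3 * has_verb + 0.3 * has_object + 0.4 * has_link

print(phishing_signal("Please verify your account at http://bank.example"))  # 1.0
```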
In this paper we identify and address some of the key challenges in digital forensics. An intensive review was conducted of the major challenges that have already been identified. Based on the findings, we propose a solution and describe how a standardized body governing the digital forensics community could make a difference.
As advanced metering infrastructure (AMI) is deployed throughout the power grid, identifying the attack surface is a necessary step toward achieving cyber security in smart grids and AMI. An important first step is to define and illustrate the cyber attack surface with respect to hardware and network configurations, protocols, and software.
The use of deception to enhance security has shown promising results as a defensive technique. In this paper we present an authentication scheme that better protects users’ passwords than currently deployed password-based schemes, without taxing the users’ memory or damaging the user-friendliness of the login process. Our scheme maintains compatibility with traditional password-based authentication without any additional storage requirements, giving service providers the ability to selectively enroll users and to fall back to traditional methods if needed. The scheme utilizes the ubiquity of smartphones; however, unlike previous proposals, it does not require registration or connectivity of the phones used. In addition, no long-term secrets are stored on any user’s phone, mitigating the consequences of losing it. Our design significantly increases the difficulty of launching a phishing attack by automating the decision of whether a website should be trusted and by introducing the additional risk, on the adversary’s side, of being detected and deceived. In addition, the scheme is resilient against man-in-the-browser (MitB) attacks and compromised client machines. We also introduce a covert communication channel between the user’s client and the service provider, which can be used to covertly and securely communicate the user context that comes with the use of this mechanism. The scheme also incorporates deception, making it possible to dismantle a large-scale attack infrastructure before it succeeds. As an added feature, the scheme gives service providers the ability to have full-transaction authentication.
In this work we present a simple yet effective and practical scheme to improve the security of stored password hashes, rendering their cracking detectable and insuperable at the same time. We utilize a machine-dependent function, such as a physically unclonable function (PUF) or a hardware security module (HSM), at the authentication server. The scheme can be easily integrated with legacy systems without the need for additional servers, changes to the structure of the hashed password file, or any client modifications. When the scheme is used, the structure of the hashed passwords file, e.g., /etc/shadow or /etc/master.passwd, will appear no different than in the traditional scheme. However, when an attacker exfiltrates the hashed passwords file and tries to crack it, the only passwords he will obtain are the ersatzpasswords, that is, the “fake passwords.” When an attempt to log in using one of these ersatzpasswords is detected, an alarm is triggered in the system, indicating that someone attempted to crack the password file. Even an adversary who knows the scheme cannot launch a cracking attack without physical access to the authentication server. The scheme also includes a secure backup mechanism in the event of a failure of the hardware-dependent function. We discuss our implementation and provide some discussion in comparison to the traditional authentication scheme.
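To make the mechanics concrete, here is a simplified sketch of how such a login flow might distinguish the real password from an ersatz one. It is not the paper's construction: the real scheme leaves the hashed-password file format completely unchanged, whereas this sketch stores two fields for clarity, and the HMAC key standing in for the PUF/HSM is purely hypothetical.

```python
import hashlib, hmac, os

MACHINE_KEY = b"\x13" * 32  # hypothetical; in reality held inside a PUF/HSM

def hdf(data):
    """Stand-in for the machine-dependent function: a keyed MAC whose key
    never leaves the hardware, so cracking requires physical access."""
    return hmac.new(MACHINE_KEY, data, hashlib.sha256).digest()

def enroll(password, ersatz, salt):
    return {
        "salt": salt,
        # What an offline cracker can invert: the hash of the ersatz password.
        "stored": hashlib.sha256(salt + ersatz.encode()).hexdigest(),
        # What the server actually checks, bound to the hardware function.
        "check": hashlib.sha256(hdf(salt + password.encode())).hexdigest(),
    }

def login(entry, candidate):
    salt = entry["salt"]
    if hashlib.sha256(hdf(salt + candidate.encode())).hexdigest() == entry["check"]:
        return "ok"
    if hashlib.sha256(salt + candidate.encode()).hexdigest() == entry["stored"]:
        return "ALARM: ersatz password used; hash file was likely cracked"
    return "denied"

entry = enroll("correct horse", "hunter2", os.urandom(16))
print(login(entry, "correct horse"))  # ok
print(login(entry, "hunter2"))        # ALARM ...
```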
This paper explores the security of WinRAR encrypted archives. Previous work on potential attacks against encrypted archives is studied and evaluated for practical implementation. These attacks include passive techniques that examine the compression ratios of archives and the files they contain, the study of temporary artifacts, and active man-in-the-middle attacks on communication between individuals. An extensive overview of the WinRAR software and the functions implemented within it is presented to aid in understanding the intricacies of attacks against archives.
Several attacks are chosen from the literature and executed against WinRAR v5.10. Select file types are identified through the examination of compression ratios. The presence of a file in an archive is determined both through the appearance of substrings in the known area of an archive and through the comparison of compression ratios.
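The compression-ratio heuristic can be illustrated in a few lines: already-compressed formats barely shrink further, while text compresses heavily, so the ratio alone narrows down a file's likely type. The sketch below uses zlib as a stand-in compressor with invented cut-offs; the paper's distinctions are drawn from WinRAR's own compression behavior, not from these numbers.

```python
import zlib

def compression_ratio(data):
    """Compressed size divided by original size."""
    return len(zlib.compress(data, 9)) / max(1, len(data))

def guess_category(ratio):
    # Hypothetical cut-offs for illustration only.
    if ratio > 0.95:
        return "already compressed or encrypted (media, archives)"
    if ratio > 0.5:
        return "binary / executable-like"
    return "highly compressible (text, logs, source code)"

text = b"the quick brown fox jumps over the lazy dog\n" * 100
print(guess_category(compression_ratio(text)))  # highly compressible
```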
Finally, the author outlines a revised version of an attack that takes advantage of the independence between the compression and encryption algorithms. While a previous version of this attack only succeeded in removing the encryption from an archive, the revised version is capable of fully recovering an original document from an encrypted, compressed archive. The advantages and shortcomings of these attacks are discussed, and some countermeasures are briefly mentioned.