A key issue in computer system security is protecting information against unauthorized access. Emerging workflow-based applications in healthcare, manufacturing, the financial sector, and e-commerce inherently have complex, time-based access control requirements. To address the diverse security needs of these applications, a Role Based Access Control (RBAC) approach can be used as a viable alternative to traditional discretionary and mandatory access control approaches. The key features of RBAC include policy neutrality, support for least privilege, and efficient access control management. However, existing RBAC approaches do not address the growing need to support time-based access control requirements in these applications.
This research presents a Generalized Temporal Role Based Access Control (GTRBAC) model that combines the key features of the RBAC model with a powerful temporal framework. The proposed GTRBAC model allows the specification of a comprehensive set of time-based access control policies, including temporal constraints on role enabling, user-role and role-permission assignments, and role activation. The model provides an event-based mechanism for supporting dynamic access control policies, which are crucial for developing secure workflow-based enterprise applications. In addition, the temporal hierarchies and separation-of-duty constraints facilitated by GTRBAC allow the development of security policies for commercial enterprises. The thesis provides various design guidelines for managing complexity and building secure systems based on this model. X-GTRBAC, an XML-based policy language, has been developed to allow the specification of GTRBAC policies.
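To make the flavor of such temporal constraints concrete, the following minimal Python sketch (hypothetical; not the thesis's X-GTRBAC syntax, and all class, user, and role names are invented) shows role enabling restricted to a daily time window, with activation checked against both the user-role assignment and the enabling window:

```python
from datetime import datetime, time

# Hypothetical sketch in the spirit of GTRBAC temporal constraints:
# a role may be activated only while its daily enabling window is open
# and the user holds an assignment to it.

class TemporalRole:
    def __init__(self, name, enable_from, enable_until):
        self.name = name
        self.enable_from = enable_from    # daily enabling window start
        self.enable_until = enable_until  # daily enabling window end

    def is_enabled(self, at: datetime) -> bool:
        return self.enable_from <= at.time() <= self.enable_until

class Policy:
    def __init__(self):
        self.assignments = set()  # (user, role_name) pairs

    def assign(self, user, role):
        self.assignments.add((user, role.name))

    def can_activate(self, user, role, at: datetime) -> bool:
        return (user, role.name) in self.assignments and role.is_enabled(at)

day_doctor = TemporalRole("day_doctor", time(8, 0), time(20, 0))
policy = Policy()
policy.assign("alice", day_doctor)
print(policy.can_activate("alice", day_doctor, datetime(2024, 1, 1, 9, 30)))  # True
print(policy.can_activate("alice", day_doctor, datetime(2024, 1, 1, 23, 0)))  # False
```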
In this thesis, we investigate two routing problems. The first, which is known as the multiconstraint QoS (quality of service) routing problem, is to find a single path that satisfies multiple QoS constraints. For this problem, we consider two routing environments: (a) a given source node has detailed routing information provided by a link-state protocol, and (b) the source node has relatively simple routing information provided by a distance-vector protocol. First, we develop a greedy scheme, called MPLMR (Multi-postpath-based lookahead multiconstraint routing), for case (a). MPLMR has an efficient
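As background on the problem setting, the following hedged Python sketch shows the feasibility test at the heart of multiconstraint QoS routing: a candidate path is acceptable only if every additive metric (delay, cost, and so on) stays within its bound. This illustrates the constraint check only, not MPLMR itself; the example graph and weights are invented.

```python
# Hypothetical sketch of the feasibility test at the core of multiconstraint
# QoS routing: every additive metric along the path must meet its bound.

def path_weights(path, weights):
    """Sum each additive QoS metric along a path, given per-edge weight vectors."""
    totals = [0.0] * len(next(iter(weights.values())))
    for u, v in zip(path, path[1:]):
        for i, w in enumerate(weights[(u, v)]):
            totals[i] += w
    return totals

def feasible(path, weights, bounds):
    return all(t <= b for t, b in zip(path_weights(path, weights), bounds))

# Edge -> (delay, cost); does s-a-t satisfy delay <= 10 and cost <= 5?
weights = {("s", "a"): (4.0, 2.0), ("a", "t"): (5.0, 2.0)}
print(feasible(["s", "a", "t"], weights, (10.0, 5.0)))  # True
```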
An interactive conception of the psychological refractory period (PRP) effect is proposed on the basis of Hommel’s (1998) two-process approach to account for compatibility effects in the PRP task. The interactive conception assumes that response selection has two components. One component is stimulus-response (S-R) translation, which can occur automatically and simultaneously for both tasks. The other component is final response selection, which is the locus of the bottleneck and can process only one task at a time. The account suggests that between-task crosstalk and noncurrent-task response association have strong impacts on S-R translation when there is a contingency between the two tasks. Six PRP experiments were conducted: the first three contained no contingency between the two tasks, but the last three did. Greenwald and Shulman’s (1973) S-R compatible and ideomotor compatible tasks were used in Experiments 1-3, with both responses (R1 and R2 for Task 1 and Task 2, respectively) required in Experiments 1-2 and only R2 in Experiment 3. The interactive conception predicts that a PRP effect should be obtained in Experiments 1 and 2 but not in Experiment 3, because only when R1 is required must the selection of R2 wait until the selection of R1 is completed. A PRP effect was indeed evident in Experiments 1 and 2 but not in Experiment 3. In Experiment 4, the dimensional overlap of color between the first stimulus (S1) and the second (S2) was manipulated, and participants were instructed to respond to S2 only. A large PRP effect was obtained in the dimensional-overlap condition and a small, but significant, PRP effect in the condition with no dimensional overlap. Experiments 5 and 6 examined the effect of S1-S2 correlation (high, low, and neutral) on the PRP effect, as well as, in Experiment 6, spatial correspondence (R1-R2 correspondent and R1-R2 noncorrespondent). An overadditive interaction of S1-S2 correlation and SOA was obtained in both experiments. A comparison between Experiments 5 and 6 showed no difference in the PRP effect obtained at the three levels of S1-S2 correlation. However, the effect of correlation tended to be larger at the short SOA in Experiment 6, in which spatial correspondence of responses was manipulated, than in Experiment 5, in which it was not. The results of these experiments agree with the interactive conception of the PRP effect, in which the contingency between two tasks affects S-R translation, a process distinct from final response selection.
Type systems in object-oriented languages are useful tools to ensure correctness, safety, and integration of programs. This thesis studies the matching of recursive interface types for the purpose of software-system integration, and type inference for object types to help reduce bulky type information in programs with flexible type systems. We explore the problem of equality and subtyping of recursive types. Potential applications include automatic generation of bridge code for multi-language systems and type-based retrieval of software modules from libraries. We present efficient decision procedures for a notion of type equality that includes unfolding of recursive types and associativity and commutativity of product types. Advocated by Auerbach, Barton, and Raghavachari, these properties enable flexible matching of recursive types. We also present results on type inference for object-oriented languages with flexible type systems, including features such as read-only fields and record concatenation. Read-only fields are useful in object calculi, pi calculi, and statically typed intermediate languages because they admit covariant subtyping, unlike updateable fields. For example, Glew’s translation of classes and objects to an intermediate calculus places the method tables of classes into read-only fields; covariant subtyping on the method tables is required to ensure that subclasses are translated to subtypes. In programs that use updateable fields, read-only fields can be either specified or discovered. For both cases, we show that type inference is equivalent to solving type constraints and is computable in polynomial time. Record concatenation, multiple inheritance, and multiple-object cloning are closely related and appear in various language designs. For example, in Cardelli’s untyped Obliq language, a new object can be constructed from several existing objects by cloning followed by concatenation; an error is given in case of field-name conflicts. We present a polynomial-time type inference algorithm for record concatenation, subtyping, and recursive types. Our example language is the Abadi-Cardelli object calculus extended with a concatenation operator. Our algorithm enables efficient type checking of Obliq programs without changing the programs at all.
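As an illustration of type equality up to unfolding, here is a minimal Python sketch of the classic coinductive algorithm: assume a pair of types equal, unfold mu-types on demand, and succeed upon revisiting an assumed pair. Matching products up to associativity and commutativity (the flexible matching studied in the thesis) needs further machinery and is omitted; the term encoding is invented.

```python
# Types encoded as tuples: ("var", x), ("mu", x, t),
# ("arrow", t1, t2), ("prod", t1, t2).

def subst(t, x, s):
    """Capture-avoiding substitution of s for the variable x in t."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "mu":
        return t if t[1] == x else ("mu", t[1], subst(t[2], x, s))
    return (tag, subst(t[1], x, s), subst(t[2], x, s))

def equal(a, b, seen=None):
    """Equality up to unfolding of mu-types, via a coinductive assumption set."""
    seen = seen or set()
    if (a, b) in seen:
        return True               # coinductive hypothesis: already assumed equal
    seen = seen | {(a, b)}
    if a[0] == "mu":
        return equal(subst(a[2], a[1], a), b, seen)
    if b[0] == "mu":
        return equal(a, subst(b[2], b[1], b), seen)
    if a[0] != b[0]:
        return False
    if a[0] == "var":
        return a[1] == b[1]
    return equal(a[1], b[1], seen) and equal(a[2], b[2], seen)

# mu X. int -> X equals its own unfolding int -> (mu X. int -> X)
t = ("mu", "X", ("arrow", ("var", "int"), ("var", "X")))
print(equal(t, ("arrow", ("var", "int"), t)))  # True
```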
A summary of the best methods for factoring integers known at the time of publication.
This is a clear, concise summary of the most important and useful parts of cryptanalysis.
We give an algorithm for computing formulas for Aurifeuillian factorizations and study the period of the Bell exponential integers modulo a prime number.
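For a concrete sense of the second result, the following small Python sketch (not the paper's algorithm) computes Bell numbers modulo a prime with the Bell triangle and probes for the period of the resulting sequence:

```python
def bell_mod(n, p):
    """Return [B(0), ..., B(n)] modulo p, computed with the Bell triangle."""
    row = [1 % p]
    bells = [1 % p]
    for _ in range(n):
        new = [row[-1]]              # each row starts with the previous row's last entry
        for v in row:
            new.append((new[-1] + v) % p)
        row = new
        bells.append(row[0])         # B(k) is the first entry of row k
    return bells

def find_period(seq):
    """Smallest shift d under which the whole observed window repeats."""
    for d in range(1, len(seq) // 2):
        if all(seq[i + d] == seq[i] for i in range(len(seq) - d)):
            return d
    return None

print(find_period(bell_mod(200, 3)))  # 13 = (3**3 - 1) // (3 - 1), the known period mod 3
```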
A clear, concise summary of calculations in number theory.
This paper introduces a novel method of rights protection for categorical data through watermarking. We discover new watermark embedding channels for relational data with categorical types. We design novel watermark encoding algorithms and analyze important theoretical bounds including mark vulnerability. While fully preserving data quality requirements, our solution survives important attacks, such as subset selection and data re-sorting. Mark detection is fully “blind” in that it doesn’t require the original data, an important characteristic especially in the case of massive data. We propose various improvements and alternative encoding methods. We perform validation experiments by watermarking the outsourced Wal-Mart sales data available at our institute. We prove (experimentally and by analysis) our solution to be extremely resilient to both alteration and data loss attacks, for example tolerating up to 80% data loss with a watermark alteration of only 25%.
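As a hypothetical toy in the spirit of such a blind scheme (not the paper's actual encoding; the categories, key, and helper names are all invented), a keyed hash of each tuple's primary key can decide both which tuples carry a mark and which valid category value encodes the mark bit, so that detection needs only the secret key:

```python
import hmac, hashlib

# Toy blind watermark for a categorical attribute: the keyed hash of the
# primary key selects marked tuples and the bit each one should encode.

CATEGORIES = ["north", "south", "east", "west"]

def keyed(key: bytes, pk: int, salt: bytes) -> int:
    mac = hmac.new(key, salt + pk.to_bytes(8, "big"), hashlib.sha256)
    return int.from_bytes(mac.digest()[:4], "big")

def embed(rows, key, fraction=4):
    """rows: list of (pk, category). Marks roughly 1/fraction of the tuples."""
    marked = []
    for pk, cat in rows:
        if keyed(key, pk, b"select") % fraction == 0:
            bit = keyed(key, pk, b"bit") % 2
            idx = CATEGORIES.index(cat)
            if idx % 2 != bit:
                idx ^= 1              # move to the paired category so parity encodes the bit
            cat = CATEGORIES[idx]
        marked.append((pk, cat))
    return marked

def detect(rows, key, fraction=4):
    """Blind detection: recompute selections and count agreeing mark bits."""
    hits = total = 0
    for pk, cat in rows:
        if keyed(key, pk, b"select") % fraction == 0:
            total += 1
            hits += (CATEGORIES.index(cat) % 2) == (keyed(key, pk, b"bit") % 2)
    return hits, total

key = b"secret"
data = [(i, CATEGORIES[i % 4]) for i in range(1000)]
print(detect(embed(data, key), key))  # all selected tuples agree with the mark
```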
Given a set of n intervals representing an interval graph, the problem of finding a maximum matching between pairs of disjoint (nonintersecting) intervals has previously been considered in the sequential model. In this paper we present parallel algorithms for computing maximum cardinality matchings among pairs of disjoint intervals in interval graphs in the EREW PRAM and hypercube models. For the general case of the problem, our algorithms compute a maximum matching in O(log^3 n) time using O(n/log^2 n) processors on the EREW PRAM and using n processors on the hypercube. For the case of proper interval graphs, our algorithm runs on the EREW PRAM in O(log n) time, using O(n) processors if the input intervals are not given already sorted and O(n/log n) processors otherwise. On n-processor hypercubes, our algorithm for the proper interval case takes O(log n log log n) time for unsorted input and O(log n) time for sorted input. Our parallel results also lead to optimal sequential algorithms for computing maximum matchings among disjoint intervals. In addition, we present an improved parallel algorithm for maximum matching between overlapping intervals in proper interval graphs.
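As a point of reference, a simple sequential greedy (a baseline sketch, not the paper's parallel algorithm) pairs each interval, scanned by left endpoint, with the unmatched interval having the smallest right endpoint that ends strictly before it starts; this produces a valid, though not necessarily maximum, matching of disjoint pairs:

```python
import heapq

def greedy_disjoint_matching(intervals):
    """Return index pairs (k, i) with intervals[k] ending before intervals[i] starts."""
    events = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    by_right = sorted(range(len(intervals)), key=lambda i: intervals[i][1])
    finished = []            # min-heap of (right_endpoint, index) already closed
    used = set()
    pairs = []
    j = 0
    for i in events:
        left = intervals[i][0]
        # move every interval that ends before `left` into the candidate heap
        while j < len(by_right) and intervals[by_right[j]][1] < left:
            heapq.heappush(finished, (intervals[by_right[j]][1], by_right[j]))
            j += 1
        # discard candidates already matched
        while finished and finished[0][1] in used:
            heapq.heappop(finished)
        if finished and i not in used:
            _, k = heapq.heappop(finished)
            pairs.append((k, i))
            used.update({i, k})
    return pairs

print(greedy_disjoint_matching([(0, 1), (2, 3), (4, 5), (6, 7)]))
# [(0, 1), (2, 3)]: intervals 0&1 and 2&3 form two disjoint pairs
```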
The search for weak periodic signals in time series data is an active topic of research. Since real-world datasets are rarely perfectly periodic, this paper approaches the problem from a data mining perspective, aiming to discover approximate and partial periodicities when no period length is known in advance. We utilize the autocorrelation function to extract partial periodicities from large time series. In existing time series mining algorithms, the period length is user-specified. We propose an algorithm that extracts a set of candidate periods featured in a time series that satisfy a minimum confidence threshold, by utilizing the autocorrelation function and the FFT as a filter. We extend this technique to capture approximate periodicities, and we provide mathematical background as well as experimental results.
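A compact Python sketch of this pipeline (the threshold parameter and test signal are invented for illustration) estimates the autocorrelation via the FFT using the Wiener-Khinchin relation and reports lags whose normalized autocorrelation clears a confidence threshold as candidate periods:

```python
import numpy as np

def candidate_periods(x, threshold=0.5):
    """Lags whose normalized autocorrelation meets the confidence threshold."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    f = np.fft.rfft(x, 2 * n)               # zero-pad to avoid circular wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]  # Wiener-Khinchin: |FFT|^2 -> autocorrelation
    acf /= acf[0]                           # normalize so acf[0] == 1
    return [lag for lag in range(1, n // 2) if acf[lag] >= threshold]

# A noisy signal with period 50: lags near multiples of 50 should surface.
rng = np.random.default_rng(0)
t = np.arange(1000)
x = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(1000)
print(candidate_periods(x, threshold=0.6))  # e.g. lags clustered near 50, 100, ...
```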
The problem of efficient data structures for IP lookups has been well studied in the literature; techniques such as LC tries and Extensible Hashing are commonly used. In this paper, we address the problem of generalizing LC tries and Extensible Hashing, based on traces of past lookups, to provide performance guarantees for memory sub-optimal structures. As a specific example, if a memory-optimal (LC) trie takes 6 MB and the total memory at the router is 8 MB, how should the trie be modified to make the best use of the 2 MB of excess memory? We present a greedy algorithm for this problem and prove that, if the optimal data structure saves b memory accesses per lookup on average compared with the original trie, the solution produced by the greedy algorithm saves 9b/22 memory accesses on average (compared with the original trie). An efficient implementation of this algorithm presents significant additional challenges; we describe an implementation with a time complexity of O(ξ(d) n log n) and a space complexity of O(n), where n is the number of nodes of the trie and d its depth. The depth of a trie is fixed for a given version of the Internet protocol and is typically O(log n); in this case, ξ(d) = O(log^2 n). We demonstrate experimentally the performance and scalability of the algorithm on actual routing data, and we show that it significantly outperforms Extensible Hashing for the same amount of memory.
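For context, the sketch below shows the baseline structure that LC tries compress: a plain binary trie answering longest-prefix-match queries, where spending excess memory to flatten frequently visited subtries into arrays (as studied above) reduces memory accesses per lookup. This is an illustrative Python toy, not the paper's implementation:

```python
class TrieNode:
    """One node of a binary routing trie; nexthop marks a stored prefix."""
    __slots__ = ("children", "nexthop")
    def __init__(self):
        self.children = [None, None]
        self.nexthop = None

def insert(root, prefix_bits, nexthop):
    node = root
    for b in prefix_bits:
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.nexthop = nexthop

def longest_prefix_match(root, addr_bits):
    """Walk the address bits, remembering the deepest prefix seen so far."""
    node, best = root, None
    for b in addr_bits:
        if node.nexthop is not None:
            best = node.nexthop
        node = node.children[b]
        if node is None:
            return best
    return node.nexthop if node.nexthop is not None else best

root = TrieNode()
insert(root, [1, 0], "A")        # prefix 10/2
insert(root, [1, 0, 1, 1], "B")  # prefix 1011/4
print(longest_prefix_match(root, [1, 0, 1, 1, 0, 0]))  # "B"
print(longest_prefix_match(root, [1, 0, 0, 1, 0, 0]))  # "A"
```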
The formidable dissemination capability afforded by current network technology makes it increasingly important to devise new methods to ensure authenticity. Nowadays it is common practice to distribute documents in compressed form. In this paper, we propose a simple variation on the classic LZ-77 algorithm that allows one to hide, within the compressed document, enough information to warrant its authenticity. The design is based on the unpredictability of a certain class of pseudo-random generators, in such a way that the hidden data cannot be retrieved in a reasonable amount of time by an attacker (unless the secret bit-string key is known). Since the document can still be decompressed by the original LZ-77 algorithm, the embedding is completely transparent.
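A toy Python illustration of the embedding channel (a simplification, not the paper's exact construction; keyed_bit stands in for the pseudo-random generator): when several source positions offer an equally long match, the free choice among them can carry hidden, key-dependent information, while any standard LZ-77 decoder still reconstructs the document from the (distance, length, next-byte) triples:

```python
import hashlib

def keyed_bit(key: bytes, i: int) -> int:
    """One pseudo-random bit derived from the secret key (toy stand-in for a PRG)."""
    return hashlib.sha256(key + i.to_bytes(4, "big")).digest()[0] & 1

def compress_embed(data: bytes, key: bytes, window: int = 255):
    """Emit (distance, length, next_byte) triples; ties among longest matches
    are broken by keyed bits, which is where the hidden information lives."""
    out, i, step = [], 0, 0
    while i < len(data):
        best_len, candidates = 0, []
        for j in range(max(0, i - window), i):
            l = 0
            while i + l < len(data) and j + l < i and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, candidates = l, [j]
            elif l == best_len and l > 0:
                candidates.append(j)
        if not candidates:
            out.append((0, 0, data[i]))
            i += 1
        else:
            pick = candidates[keyed_bit(key, step) % len(candidates)]
            step += 1
            nxt = data[i + best_len] if i + best_len < len(data) else None
            out.append((i - pick, best_len, nxt))
            i += best_len + 1
    return out

print(compress_embed(b"abracadabra abracadabra", b"secret"))
```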
Planar st-graphs find applications in a number of areas. In this paper we present efficient parallel algorithms for solving fundamental problems on planar st-graphs. The problems we consider include all-pairs shortest paths in weighted planar st-graphs, single-source shortest paths in weighted planar layered digraphs (which can be reduced to single-source shortest paths in certain special planar st-graphs), and depth-first search in planar st-graphs. Our parallel shortest path techniques exploit the specific geometric and graph-theoretic structures of planar st-graphs, and involve schemes for partitioning planar st-graphs into subgraphs in a way that ensures that the resulting path-length matrices have a monotonicity property [1], [2]. The parallel algorithms we obtain are a considerable improvement over the previously best known solutions (when they are applied to these st-graph problems), and are in fact relatively simple. The parallel computational models we use are the CREW PRAM and EREW PRAM.
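For orientation, a planar st-graph is a directed acyclic graph with a single source s and sink t, so a sequential single-source shortest path computation is just a relaxation sweep in topological order, as in the Python sketch below (the example graph is invented); the contribution above is doing substantially better in parallel:

```python
from collections import deque

def sssp_dag(n, edges, s):
    """Single-source shortest paths in a DAG on vertices 0..n-1; edges are (u, v, w)."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # Kahn's algorithm for a topological order
    order, q = [], deque(i for i in range(n) if indeg[i] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    dist = [float("inf")] * n
    dist[s] = 0
    for u in order:                      # relax edges in topological order
        for v, w in adj[u]:
            dist[v] = min(dist[v], dist[u] + w)
    return dist

# s = 0, t = 3 in a small planar st-graph
print(sssp_dag(4, [(0, 1, 1), (0, 2, 4), (1, 2, 1), (1, 3, 5), (2, 3, 1)], 0))
# [0, 1, 2, 3]
```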
Range queries are an important class of queries for several applications. Parallel I/O for such queries can be achieved by tiling the multi-dimensional data and distributing the tiles among multiple disks or nodes. It has been established that schemes achieving optimal parallel block access exist only for a few special cases. Although several schemes for allocating tiles to disks have been developed, prior to our work no scheme with a non-trivial additive worst-case bound was known. We establish that any range query on a 2^q x 2^q grid of blocks can be performed using k = 2^t disks (t ≤ q) in at most ⌈m/k⌉ + O(log k) parallel block accesses, where m is the number of blocks retrieved by the query. We achieve this result by judiciously distributing the blocks among the k nodes or disks. Experimental data show that the algorithm performs very close to ⌈m/k⌉ (on average less than 0.5 accesses above ⌈m/k⌉, with a worst case of 3). Our scheme thus guarantees performance within a small additive deviation from ⌈m/k⌉. Subsequent to this work, Bhatia et al. [4] proved that such a performance bound is essentially optimal for this kind of scheme, and extended our results to the case where the number of disks is a product of the form k1*k2*...*kt in which the ki need not all be 2.
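To contrast the additive guarantee with a classic heuristic, the hypothetical Python sketch below measures how far the well-known "disk modulo" scheme (not the paper's allocation) strays from the ideal ⌈m/k⌉ over all range queries on a small grid; already a 2x2 query on 4 disks needs 2 parallel accesses against an ideal of 1:

```python
import math

def dm_disk(i, j, k):
    """Classic 'disk modulo' declustering: block (i, j) goes to disk (i + j) mod k."""
    return (i + j) % k

def query_cost(r0, r1, c0, c1, k):
    """Parallel block accesses for a range query = max blocks landing on one disk."""
    counts = [0] * k
    for i in range(r0, r1 + 1):
        for j in range(c0, c1 + 1):
            counts[dm_disk(i, j, k)] += 1
    return max(counts)

def worst_additive_gap(n, k):
    """Worst deviation of disk modulo from ceil(m/k) over all range queries."""
    gap = 0
    for r0 in range(n):
        for r1 in range(r0, n):
            for c0 in range(n):
                for c1 in range(c0, n):
                    m = (r1 - r0 + 1) * (c1 - c0 + 1)
                    gap = max(gap, query_cost(r0, r1, c0, c1, k) - math.ceil(m / k))
    return gap

print(worst_additive_gap(8, 4))  # positive: e.g. a 2x2 query costs 2 against ceil(4/4) = 1
```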